dc.contributor.author | Korichi, Safa Batoul |
dc.contributor.author | Aimene, Karim |
dc.date.accessioned | 2022-05-29T11:12:42Z |
dc.date.available | 2022-05-29T11:12:42Z |
dc.date.issued | 2021 |
dc.identifier.uri | https://dspace.univ-ghardaia.edu.dz/xmlui/handle/123456789/1055 |
dc.description.abstract | Artificial Intelligence (AI) is increasingly moving towards multimodal learning, which involves building systems that can process information from multiple sources such as text, images, or audio. Image captioning is one of the main visual-linguistic tasks: it requires generating a caption for a given image. The challenge is to create a unified Deep Learning (DL) model capable of describing an image in a correct sentence. To do so, we need to understand the proper way to represent text in a suitable embedding space. We adopt the Transformer, which brings a new attention-based mechanism to sequence-to-sequence modeling, and we exploit the power of modern GPUs to process data efficiently and quickly. Along this path, we experimented with a Transformer-based approach and applied it to the image captioning problem using the MS COCO dataset. | EN_en
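
The abstract describes a sequence-to-sequence, Transformer-based approach to image captioning. As a minimal illustrative sketch only (not the thesis code), the following PyTorch snippet shows the general shape of such a model: pre-extracted CNN image features serve as the encoder memory, and a Transformer decoder produces caption tokens autoregressively. The feature dimension (2048), vocabulary size, and all hyperparameters here are assumptions for the sketch, not values taken from the thesis.

```python
import torch
import torch.nn as nn

class CaptioningTransformer(nn.Module):
    """Sketch of a Transformer decoder conditioned on CNN image features."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3, max_len=50):
        super().__init__()
        # Assumed: image features are 2048-dim (e.g. a ResNet feature grid),
        # projected into the Transformer's embedding space.
        self.feat_proj = nn.Linear(2048, d_model)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, captions):
        # image_feats: (batch, regions, 2048); captions: (batch, seq) token ids
        memory = self.feat_proj(image_feats)
        seq = captions.size(1)
        pos = torch.arange(seq, device=captions.device)
        tgt = self.tok_embed(captions) + self.pos_embed(pos)
        # Causal mask: each position may attend only to earlier tokens.
        mask = torch.triu(
            torch.full((seq, seq), float("-inf"), device=captions.device), diagonal=1
        )
        h = self.decoder(tgt, memory, tgt_mask=mask)
        return self.out(h)  # (batch, seq, vocab) next-token logits

# Toy usage with random tensors standing in for MS COCO data.
model = CaptioningTransformer(vocab_size=10000)
feats = torch.randn(2, 49, 2048)          # e.g. a 7x7 grid of CNN features
caps = torch.randint(0, 10000, (2, 20))   # tokenized captions
logits = model(feats, caps)
print(logits.shape)                        # torch.Size([2, 20, 10000])
```

Training such a model would minimize cross-entropy between the logits and the shifted caption tokens; at inference time, captions are generated one token at a time, for example with greedy or beam-search decoding.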
dc.publisher | Université Ghardaia | EN_en
dc.subject | Multimodal Learning, Image captioning, Deep Learning (DL), Transformer, Sequence to sequence, MS-COCO | EN_en
dc.title | Automatic Image Caption Generation: study and implementation | EN_en
dc.type | Thesis | EN_en