
Write a PREreview

A Comparative Survey of CNN-LSTM Architectures for Image Captioning

Posted
Server: Preprints.org
DOI: 10.20944/preprints202512.1301.v1

Image captioning, the task of automatically generating textual descriptions for images, lies at the intersection of computer vision and natural language processing. Architectures combining Convolutional Neural Networks (CNNs) for visual feature extraction and Long Short-Term Memory (LSTM) networks for language generation have become a dominant paradigm. This survey provides a comprehensive overview of fifteen influential papers employing these CNN-LSTM frameworks, summarizing their core contributions, architectural variations (including attention mechanisms and encoder-decoder designs), training strategies, and performance on benchmark datasets. A detailed comparative analysis, presented in tabular format, evaluates these works by detailing their technical approaches, key contributions or advantages, and identified limitations. Based on this analysis, we identify key evolutionary trends in CNN-LSTM models, discuss prevailing challenges such as generating human-like and contextually rich captions, and highlight promising future research directions, including deeper reasoning, improved evaluation, and the integration of newer architectures.
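The encoder-decoder paradigm the abstract describes can be illustrated with a minimal sketch: a CNN compresses the image into a feature vector, which then seeds an LSTM that generates the caption token by token. This is a toy PyTorch illustration of the general design, not the architecture of any specific surveyed paper; the layer sizes, the tiny convolutional encoder (a stand-in for a pretrained backbone such as ResNet), and the `CaptionNet` name are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    """Minimal CNN-LSTM captioner: a CNN encodes the image into one
    feature vector, which is fed as the first step of an LSTM decoder."""

    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=256):
        super().__init__()
        # Toy CNN encoder (real systems use a pretrained backbone)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # Prepend the image feature as the first "token" of the sequence
        feats = self.encoder(images).unsqueeze(1)           # (B, 1, E)
        seq = torch.cat([feats, self.embed(captions)], 1)   # (B, T+1, E)
        out, _ = self.lstm(seq)                             # (B, T+1, H)
        return self.head(out)                               # (B, T+1, V)

model = CaptionNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 5)))
print(tuple(logits.shape))  # (2, 6, 1000): batch, sequence, vocabulary
```

Attention-based variants discussed in the survey replace the single pooled feature with a spatial grid of CNN features that the LSTM re-weights at each decoding step.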

You can write a PREreview of A Comparative Survey of CNN-LSTM Architectures for Image Captioning. A PREreview is a review of a preprint and can vary from a few sentences to a lengthy report, similar to a peer-review report organized by a journal.

Before you start

We will ask you to log in with your ORCID iD. If you don’t have an iD, you can create one.

What is an ORCID iD?

An ORCID iD is a unique identifier that distinguishes you from everyone with the same or similar name.
