This review is the result of a virtual, live-streamed journal club organized and hosted by PREreview and the journal Current Research in Neurobiology (CRNEUR) as part of a community-based review pilot (you can read more about the pilot here). A total of 13 people joined the discussion, in addition to the event organizing team. We thank all participants who contributed to the discussion and made it possible for us to provide feedback to this preprint.
This study measured the effect of age on implicit auditory learning, building on methods developed by the authors’ group. Human subjects performed a task, delivered through an online testing system, that required them to detect repeating regularities in a sequence of tones of otherwise random frequency. Some target sequences recurred across multiple sessions, and performance on these test stimuli was compared with performance on new target sequences. The study complements previous work on aging and auditory memory by using a task with no linguistic component.
Results across all subjects replicated previous findings from the authors’ group. Subjects were not aware of the recurring targets but responded to them faster than to new targets, an advantage that remained stable for up to six months after the initial exposure. When grouped by age, older subjects took longer to learn and showed a smaller reaction time advantage than younger subjects, but the advantage persisted to the same extent in both groups. The authors conclude that, for this task, older subjects show deficits in implicitly learning the recurring targets but retain the resulting memories as long as younger subjects do. This result for implicit learning of tone sequences contrasts with studies of verbal tasks, which have reported a faster rate of forgetting in older subjects.
Below we list major and minor issues that were discussed by participants of the journal club, and, where possible, we provide suggestions on how to address those issues.
List of major issues and feedback:
The authors interpret the longer reaction times in older listeners as an effect on short-term memory. Although the report mentions correcting reaction times to account for older listeners being slower overall, the resulting effects appear small. A concern is that these effects may not reflect short-term memory at all but rather other cognitive factors (e.g., a slower response to a novel stimulus); indeed, it is difficult to assess whether RT reflects memory formation or memory retrieval. Further explanation of the difference between active recall (for familiarity recognition) and implicit recall (for reaction time) would strengthen the main result. It is interesting that the authors show the effect of implicit learning persisting over the long term in both young and old listeners, and in that respect the results are a nice follow-up to their previous work. Ultimately, it remains uncertain whether the reaction time differences reflect differences in short-term memory rather than another cognitive dimension (e.g., slower performance on a novel or more complex task, which has already been demonstrated to change with aging), and the effects in this domain are also marginal.
The justification for using the ApMEM task was to avoid effects of language and of prior experience with language tasks. The finding that years of musical training was not a significant confound supports this: prior training in recognising tone patterns did not affect performance here.
However, it would be helpful to provide a clear description of the contrast of the current approach against previous studies of aging and auditory memory. Is this specifically in contrast to studies involving language? Nonsense phonemes?
The justification for the exclusion of poor performers (4 young and 1 old) is understandable, but this may have biased the results as it could have enhanced the difference between the two groups. Can the authors please specify what happens when these poor performers are included?
Education is mentioned as being measured but there does not appear to be a clear report of any differences or findings. Presumably the authors felt that this variable could have been relevant, so it would be important to know what impact this factor had, if any.
It is not clear that the authors have adequately addressed potential effects of peripheral hearing loss. For example, could moderate hearing loss require greater listening effort, reducing the resources available to form memories (e.g., as in the FUEL model)? Could differences in hearing loss have contributed to variability in stimulus encoding that influenced subsequent memory?
It is not clear that the chosen, somewhat traditional, auditory memory model represents the most appropriate characterisation of auditory memory processes. This would be more defensible if alternative models were explored in the manuscript.
The comparison between the two groups (old and young) could potentially be misleading. It would be good to know the effects of aging both between and within these groups, i.e., what happens to someone aging from 60 to 70? This has implications for interpreting the results, as there may be a continuous, roughly linear change with age, or instead stepped changes tied to particular aging processes. One way to address this would be to provide a within-group regression analysis, with age as a continuous variable, rather than discarding these data.
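For concreteness, a minimal sketch of the kind of within-group regression we have in mind is given below. The file name, column names ("group", "age", "rt_advantage"), and data layout are placeholders of our own and are not taken from the manuscript.

```python
# Minimal sketch of a within-group regression treating age as a continuous
# predictor of the reaction time advantage. All names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject summary: one row per participant with a group
# label ("young"/"old"), age in years, and RT advantage (new minus recurring).
df = pd.read_csv("apmem_subject_summary.csv")

# Fit a separate linear model within each age group to ask whether the
# reaction time advantage changes continuously with age inside the group.
for group, sub in df.groupby("group"):
    fit = smf.ols("rt_advantage ~ age", data=sub).fit()
    print(group, "slope:", fit.params["age"], "p:", fit.pvalues["age"])
```

A clear within-group slope in the older participants would suggest a gradual, continuous change, whereas a flat within-group slope combined with a large between-group difference would be more consistent with a stepped effect.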
List of minor issues and feedback:
“Ageing” in the title suggests a continuous process, such that some might expect a longitudinal study (i.e. tracking changes in the same individuals with age) or else a study using age as a continuous variable (i.e. comparisons within the old group). “Age” may be a more appropriate substitute term.
Details of the data collection procedure itself are quite extensive and generally clear. However, the exact process of RT correction and its importance should be clarified in the methods and results sections. For example, is the baseline RT that was subtracted stable across sessions, and does it simply account for between-subject differences? (see top of page 7).
Within the methods, it would be appropriate to include a brief summary of the headphone / binaural hearing check procedure.
It is not presently clear if normality and variance homogeneity were formally tested when choosing between parametric and non-parametric methods. The authors should clarify this in the text.
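As an illustration of the checks we have in mind, a minimal sketch follows. The per-subject RT advantage values are invented placeholders, and the choice of Shapiro-Wilk and Levene tests is our assumption about one reasonable approach, not a description of the authors’ procedure.

```python
# Minimal sketch of formal assumption checks before choosing between
# parametric and non-parametric group comparisons. Values are placeholders.
import numpy as np
from scipy import stats

young = np.array([45.0, 60.2, 38.5, 52.1, 47.9, 55.3])  # RT advantage (ms)
old = np.array([22.4, 30.1, 18.7, 27.5, 25.0, 21.9])

_, p_norm_young = stats.shapiro(young)   # normality within each group
_, p_norm_old = stats.shapiro(old)
_, p_var = stats.levene(young, old)      # homogeneity of variance across groups

# If either assumption is violated, a non-parametric alternative such as the
# Mann-Whitney U test (stats.mannwhitneyu) would be the natural fallback.
print(p_norm_young, p_norm_old, p_var)
```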
A number of details of the model are provided but these appear to be insufficient to reproduce exactly what was simulated here, in the absence of source code. It would be useful to provide more exhaustive detail about the model itself, and/or a reference to the source code.
With regard to figures:
In general, acronyms are often not (re)introduced where they are used within the figure legends or the figures themselves, even though readers are likely to require reminders of these. For example, within Figure 1B, “CRT” is not defined clearly as “choice reaction task”. Similarly, “MCC” in Figure 5A is a potentially confusing term to abbreviate, and not in common usage. Depending on editorial policy, it may be advisable to clearly redefine acronyms within the figure legends where they occur.
Figure 1C: The legend for this figure is quite dense and difficult to understand without several re-reads. The nomenclature used in the diagrams here is also not fully consistent with the legend text. If reworked, this would be an opportunity to clarify the functioning and features of the model.
Figure 2: The results currently skip panel A and introduce panel B first, which raises the question of whether panel A is needed at all.
Figure 3: Given the importance of faster LTM decay in the “Old” group, panel C could benefit from the addition of a zoomed-in plot of this difference, similar to the zoomed-in subpanel showing initial buffer duration that is currently included.
Figure 5A: Further to the feedback on acronyms above, although the authors outline in the methods what MCC is, it would be helpful to include a description of the Matthews Correlation Coefficient within the figure legend to enable the reader to understand clearly what it means and how it scales. This is especially important given that the results show a significant difference between the “Young” and “Old” participant groups.
Further to the feedback on Figure 5A above, within the results section, though MCC is defined, the authors should clarify the scaling of this metric. Additionally, there is a notable typographical error here (“Mathew” should read “Matthews”).
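To illustrate the scaling point, a minimal sketch is given below; the true/predicted labels are invented placeholders and not the authors’ data. MCC is computed as (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)) and ranges from -1 (complete disagreement) through 0 (chance-level classification) to +1 (perfect classification).

```python
# Minimal sketch of the Matthews Correlation Coefficient and its range.
# Labels are placeholders, not the study's familiarity judgements.
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 0, 1, 0, 1, 0]  # 1 = sequence was actually a recurring target
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]  # 1 = participant judged it as familiar

mcc = matthews_corrcoef(y_true, y_pred)  # -1 = total disagreement, 0 = chance, +1 = perfect
print(mcc)
```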
Within the discussion, the limitations related to variability across participants and time (e.g., motivation and vigilance) are not yet fully addressed.
It is advisable that precision of the reported values be kept consistent and appropriate; for example, on page 6, reporting to 6 decimal places is excessive.
We thank the authors of the preprint for posting their work as such and for agreeing to participate in this pilot. We also thank all participants of the live-preprint journal club for their time and for engaging in the lively discussion that generated this review.
The author declares that they have no competing interests.