Title: Exploring the Impact of Training Data Bias on Automatic Generation of Video Captions
Year: 2019
ISBN: 978-3-030-05710-7
Publisher: Springer International Publishing, Cham
Pages: 178–190
Abstract:
A major issue in machine learning is the availability of training data. While this historically referred to the availability of a sufficient volume of training data, recently it has shifted to the availability of sufficient unbiased training data. In this paper we focus on the effect of training data bias on an emerging multimedia application, the automatic captioning of short video clips. We use subsets of the same training data to generate different models for video captioning using the same machine learning technique, and we evaluate the performance of the different training data subsets using a well-known video captioning benchmark, TRECVid. We train using the MSR-VTT video-caption pairs and we prune this training data in three ways: to make the set of captions describing each video more homogeneously similar, to make it more diverse, or randomly. We then assess the effectiveness of caption-generation models trained with these variations using automatic metrics as well as direct assessment by human assessors. Our findings are preliminary and show that randomly pruning captions from the training data yields the worst performance, and that pruning to make the data more homogeneous, or more diverse, improves performance slightly compared to random pruning. Our work points to the need for more training data: more video clips and, more importantly, more captions for those videos.
Authors: Smeaton, Alan F.; Graham, Yvette; McGuinness, Kevin; O'Connor, Noel E.; Quinn, Seán; Sanchez, Eric Arazo
URL: https://daselab.cs.ksu.edu/publications/exploring-impact-training-data-bias-automatic-generation-video-captions