Hi @hkproj,
I found an issue with the BLEU score calculation in train.py.
The torchmetrics.BLEUScore() metric expects, for each predicted sentence, a list of reference sentences (i.e. a list of lists of strings), but the current code passes a flat list containing a single sentence per example instead.
Here is the original code:
expected = []
expected.append(target_text)
bleu = metric(predicted, expected)
The corrected version is as follows:
expected = []
expected_list = []
expected.append(target_text)            # flat list, left as in the original code
expected_list.append([target_text])     # each reference wrapped in its own list, the shape BLEUScore expects
bleu = metric(predicted, expected_list)
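For context, here is a minimal, self-contained sketch of the input shape BLEUScore consumes (the sentences are placeholder examples, not taken from the repo): the predictions are a flat list of strings, while the targets provide one list of reference sentences per prediction.
import torchmetrics

metric = torchmetrics.BLEUScore()

# One predicted sentence and, for it, a list of acceptable reference sentences.
predicted = ["the cat is on the mat"]
expected_list = [["there is a cat on the mat", "a cat is on the mat"]]

score = metric(predicted, expected_list)  # returns a 0-dim tensor with the corpus BLEU score
print(score)
Passing a flat list of plain strings, as in the original code, does not match this expected shape, which is what leads to the incorrect score.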
Thank you for your code and videos! They help me a lot.