Issue with BLEU Score Calculation in train.py and Suggested Fix #36

Description

@Xie-yx

Hi @hkproj,

I found an issue with the BLEU score calculation in train.py.
torchmetrics.BLEUScore() expects, for each predicted sentence, a list of reference sentences (a nested list), but the current code passes a flat list containing a single sentence instead.
Here is the original code:

expected = []
expected.append(target_text)

bleu = metric(predicted, expected)

The corrected version is as follows:

expected = []
expected_list = []

expected.append(target_text)          # flat list of strings, as before
expected_list.append([target_text])   # nested list: one list of reference sentences per prediction

bleu = metric(predicted, expected_list)
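
For reference, here is a minimal standalone sketch of the input shapes torchmetrics.BLEUScore() expects; the sentences and variable names below are just made-up examples:

import torchmetrics

metric = torchmetrics.BLEUScore()

# preds: one string per predicted sentence
preds = ["the cat sat on the mat"]

# target: for each prediction, a list of reference sentences
target = [["the cat is sitting on the mat", "a cat sat on the mat"]]

score = metric(preds, target)  # returns a tensor with the corpus-level BLEU score
print(score)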

Thank you for your code and videos! They have helped me a lot.
