It’s an issue in two primary cases that I am aware of:
a) multi-lingual areas, as you noted!
…
When one name is in, say, French and the other in German, but the software has no metadata marking which is which, the resulting audio can be fairly mangled for at least one of them. It’s not great, though locals can get used to it. Some software also does a better job of auto-detecting languages than others.
But auto-detection does not help when we have, for example, streets in Canada with the text “Avenue Nerville Avenue” in the name tag … how can the text-to-speech software even tell which part is in which language? Is the first “Avenue” English or French? What about the second one?
b) when the language of the navigation aid does not match the language of the locality.
A few years back I was driving around with a friend whose satnav was set to English (which they spoke) as we passed through French- and German-speaking towns. The street names were absolutely butchered, to the point that I could barely understand them even though I knew some of the streets by name, because the nav vocalized the place names as if they were English.
Because the name tag in OSM does not include language information, text-to-speech vocalization of navigation directions built from those name values relies on a mix of heuristics (“people speak XXX in this country…”) and language auto-detection. Neither is perfect, and neither is available in every nav device/service.
Having language metadata for the name(s) gives a simpler, more reliable path to “vocalize this name with the correct pronunciation”, particularly when multiple languages are involved.
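To make the contrast concrete, here is a minimal sketch of the two paths a nav app might take when picking a TTS language for a street name. The tag keys follow OSM conventions (`name`, `name:fr`, `name:de`, …), but the function name and the country-default table are illustrative assumptions of mine, not any real router’s API:

```python
# Crude "people speak XXX in this country" heuristic -- the fallback
# a nav app is stuck with when no language metadata exists.
COUNTRY_DEFAULT_LANG = {
    "FR": "fr",
    "DE": "de",
    "CA": "en",  # ignores francophone Canada entirely -- exactly the problem
}

def pick_tts_language(tags: dict, country: str) -> tuple[str, str]:
    """Return (text, language code) to hand to a TTS engine.

    Hypothetical helper: prefers explicit name:XX metadata when it
    matches the display name, else falls back to the country heuristic.
    """
    # 1. Prefer explicit language metadata when the mapper provided it.
    for key, value in tags.items():
        if key.startswith("name:") and value == tags.get("name"):
            return value, key.split(":", 1)[1]
    # 2. Otherwise, fall back to the per-country guess.
    return tags.get("name", ""), COUNTRY_DEFAULT_LANG.get(country, "en")

# A street tagged with only a bare name gets the country guess:
print(pick_tts_language({"name": "Hauptstraße"}, "DE"))

# With a matching name:fr tag, the metadata wins even where the
# country-level guess would be wrong:
print(pick_tts_language(
    {"name": "Avenue Nerville", "name:fr": "Avenue Nerville"}, "CA"))
```

The second call is the interesting one: the country heuristic alone would vocalize a Quebec street as English, while the `name:fr` metadata resolves it correctly with no auto-detection needed.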
I find the audio use case helpful in reminding me that OSM isn’t a map but a database and that there are a lot of different ways OSM data gets used.