I’m currently exploring what these might mean in terms of translation, so hopefully there’s more to come on that soon…
Do you think we’re close to developing the right conditions for this anytime soon?
Mentioning EEG, ECG and so on as the sensors required to detect cognitive states would probably make eyes roll. We can’t deploy those kinds of sensors in the normal world of work.
However, I believe that other sensors are already built into our regular work environments, for example, cameras for eye tracking and pulse monitors, the latter of which many of us already wear in our smartwatches or fitness devices. So, some of the technology required to tackle augmentation is there.
The problem is that these sensors may not be very accurate, which could lead to annoying behavior from the supporting technology. Tuning such systems is known to be very challenging, and they would ideally be personalized, so I think we have some way to go before we have implementable systems.
This type of “monitoring” also introduces some very thorny ethical questions, which will need to be very seriously considered too.
This article is the first in a series that takes a deeper look at the research presented at the 2022 NeTTT conference. You can find the rest here: