AI newscasting can increase distrust of media – UP professor


MANILA, Philippines — Philippine broadcasting giant GMA Network Inc.'s recent experiment of using artificial intelligence to deliver sports news through avatars called AI sportscasters is not a “good use” of the emerging technology in journalism, according to a journalism scholar.

Professor Rachel Khan of the University of the Philippines Diliman-College of Mass Communication said the AI sportscasting experiment further obscures the need for audiences to discriminate among sources of factual information, particularly the vast audience now online.

“The long-term impact is that … it’s going to make people start doubting, or even increasing their distrust of the media,” Khan said in an interview with “The Chiefs” aired on Cignal TV’s One News last Saturday.

“That’s the bad side or the possible consequence or possible backfiring of this. Of course, it will take time to tell,” she added.

For Khan, such an experiment sends the wrong signal, as she noted that prestigious global news organizations have not latched on to the AI newscasting novelty.

“The technology has been there, it’s not that it’s new. But why are the big companies not using it? Why is Reuters not using it? Why is Bloomberg not using it?” she said.

She agreed with “The Chiefs” that the AI sportscasters are like virtual puppets that are apparently gaining favor in authoritarian societies.

“But you have small channels in India using it, China is using it much. Soon, probably, they will use it in the other (special administrative regions) like Hong Kong or Macau, places where they want to control media,” Khan said, noting that AI newscasters will not question the news inputs they are fed before spewing them out.

“You don’t have to control a puppet, because it’s already controlled, it already has strings,” she added.

Khan advised news organizations to consider their social responsibility as they explore the use of AI in their public duty of informing their audiences.

“It’s something that I don’t think we can expect from AI; however, this should be a concern (among) news organizations,” she said, referring to social responsibility.

“It’s something that should be in AI policy. Concern for the audiences necessarily means what do you use AI for. So you only use it when it can enhance reporting, when it can enhance transparency,” she added.

The UP professor emphasized that if a news organization really wants to serve its audience, then it should use AI precisely so it can do more in-depth reporting and expand coverage.

“The human reach is limited, but with AI, we can reach more things, you can gather more data at a faster speed. For me, that’s where AI should be used for – in fact-checking, we can cover more bases in terms of monitoring disinformation channels,” she said, reiterating that AI policy should focus on the right use of the technology.

“Like any technology that is introduced, it can be used for the good or the bad,” she added.
