According to Brent Hecht, researchers should disclose any potential negative consequences their work in artificial intelligence may have on society, particularly regarding privacy and the use and management of data. Hecht, a computer scientist, chairs the Future of Computing Academy (FCA) and was part of the team of young leaders that made the policy proposal in March. He believes that without such standards, computer scientists may continue producing software and machines without considering their effects on society. Hecht’s peer-review proposal says that when a paper reaches a peer reviewer, the reviewer should evaluate not only its intellectual rigor but also the author’s claims about its impact.
In essence, the review process should also attempt to identify unintended uses and expected side effects. However, Hecht and his team do not propose that social impact become grounds for rejecting publications. Their recommendation is that authors disclose all potential negative impacts that could arise from the use of the new technology. For such predictions to succeed, Hecht thinks social scientists should be involved and included in the execution process. Google has already responded to the proposal. Publicly disclosing the negative impacts of new technology will open new pathways for solving emerging problems that would otherwise go unaddressed.
Research source link:
