The breakthrough technology is a new kind of machine learning system called a large language model. Give the model a prompt, hit return, and you get back full paragraphs of unique text. These models are capable of producing all kinds of outputs – essays, blogposts, poetry, op-eds, lyrics and even computer code.

Initially developed by AI researchers just a few years ago, they were treated with caution and concern. OpenAI, the first company to develop such models, restricted their external use and did not release the source code of its most recent model as it was so worried about potential abuse. OpenAI now has a comprehensive policy focused on permissible uses and content moderation.

But as the race to commercialise the technology has kicked off, those responsible precautions have not been adopted across the industry. In the past six months, easy-to-use commercial versions of these powerful AI tools have proliferated, many of them without the barest of limits or restrictions. One company’s stated mission is to employ cutting-edge AI technology in order to make writing painless. Another released an app for smartphones with an eyebrow-raising sample prompt for a high schooler: “Write an article about the themes of Macbeth.” We won’t name any of those companies here – no need to make it easier for cheaters – but they are easy to find, and they often cost nothing to use, at least for now.

‘A well written and unique English essay on Hamlet is now just a few clicks away.’ Photograph: Max Nash/AP

For a high school pupil, a well written and unique English essay on Hamlet or short argument about the causes of the first world war is now just a few clicks away. While it’s important that parents and teachers know about these new tools for cheating, there’s not much they can do about it. It’s almost impossible to prevent kids from accessing these new technologies, and schools will be outmatched when it comes to detecting their use. This also isn’t a problem that lends itself to government regulation. While the government is already intervening (albeit slowly) to address the potential misuse of AI in various domains – for example, in hiring staff, or facial recognition – there is much less understanding of language models and how their potential harms can be addressed.

In this situation, the solution lies in getting technology companies and the community of AI developers to embrace an ethic of responsibility. Unlike in law or medicine, there are no widely accepted standards in technology for what counts as responsible behaviour. There are scant legal requirements for beneficial uses of technology. In law and medicine, standards were a product of deliberate decisions by leading practitioners to adopt a form of self-regulation. In this case, that would mean companies establishing a shared framework for the responsible development, deployment or release of language models to mitigate their harmful effects, especially in the hands of adversarial users.

What could companies do that would promote the socially beneficial uses and deter or prevent the obviously negative uses, such as using a text generator to cheat in school? There are a number of obvious possibilities. Perhaps all text generated by commercially available language models could be placed in an independent repository to allow for plagiarism detection. A second would be age restrictions and age-verification systems to make clear that pupils should not access the software. Finally, and more ambitiously, leading AI developers could establish an independent review board that would authorise whether and how to release language models, prioritising access to independent researchers who can help assess risks and suggest mitigation strategies, rather than speeding toward commercialisation. After all, because language models can be adapted to so many downstream applications, no single company could foresee all the potential risks (or benefits).

Years ago, software companies realised that it was necessary to thoroughly test their products for technical problems before they were released – a process now known in the industry as quality assurance.
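The shared plagiarism-detection repository proposed above could, in its simplest form, work like a fingerprint registry: each provider submits hashed word n-grams of every text its model generates, and a teacher checks how much of a submitted essay matches the registry. The sketch below is purely illustrative – the class name, the n-gram size of 8 and the match thresholds are assumptions for demonstration, not any company’s actual system.

```python
import hashlib

def fingerprints(text, n=8):
    """Hashed overlapping word n-grams of a text (illustrative n=8)."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

class GeneratedTextRegistry:
    """Hypothetical shared repository of model-generated text fingerprints."""
    def __init__(self):
        self._store = set()

    def register(self, generated_text):
        """Called by a provider each time its model emits a text."""
        self._store |= fingerprints(generated_text)

    def overlap(self, submission):
        """Fraction of the submission's n-grams already in the registry."""
        fps = fingerprints(submission)
        return len(fps & self._store) / len(fps)

registry = GeneratedTextRegistry()
registry.register(
    "Hamlet delays his revenge because thought itself paralyses action, "
    "a theme Shakespeare develops through soliloquy after soliloquy."
)

copied = ("Hamlet delays his revenge because thought itself paralyses action, "
          "a theme Shakespeare develops through soliloquy after soliloquy.")
original = "The first world war began for many interlocking political reasons."

print(registry.overlap(copied) > 0.9)    # verbatim copy: high overlap
print(registry.overlap(original) < 0.1)  # unrelated essay: negligible overlap
```

Because only hashes are stored, such a registry would not need to expose the generated texts themselves, though paraphrasing would defeat this naive exact-match scheme – which is why it is a sketch of the idea, not a complete answer to cheating.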