How Smart Is OpenAI?
OpenAI, the artificial intelligence (AI) research lab co-founded by Elon Musk, has announced the creation of a frighteningly capable text generator. Unusually, the lab chose to withhold the full system rather than release it to the public. The model, called GPT-2, is designed to write like a human and is a genuine leap forward: it can produce alarmingly convincing text. GPT-2 was ‘trained’ by analyzing eight million web pages and can write long passages based on ‘prompts’ written by real humans. But the full model will not be released, because it risks being used for malicious purposes: producing fake news, impersonating people online, automating spam production, or churning out ‘abusive or fake content’ and posting it on social media. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper,” OpenAI wrote.
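GPT-2 itself is a large neural network and is not reproduced here, but the prompt-to-continuation workflow the article describes can be illustrated with a deliberately tiny stand-in. The sketch below is a toy word-level Markov chain, not OpenAI's model; the corpus and function names are invented for illustration:

```python
# Toy illustration of prompt-conditioned text generation (NOT GPT-2):
# a word-level Markov chain that, like a language model, continues a
# human-written prompt one "likely next word" at a time.
import random
from collections import defaultdict

def train(corpus: str):
    """Record which word follows which in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt: str, length: int = 8, seed: int = 0) -> str:
    """Continue the prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:          # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A made-up miniature "web corpus" for demonstration only.
corpus = ("the model writes text . the model reads text . "
          "the human writes a prompt . the model continues the prompt .")
model = train(corpus)
print(generate(model, "the model"))
```

A real language model replaces the frequency table with a learned neural network and samples sub-word tokens, but the interface (prompt in, continuation out) is the same.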
GPT-2 is understandably frightening for journalists, because it can do much of our work quite effectively. The same kind of system could also be used to build customer-service ‘bots’ for online shopping, put more people out of work, and translate between languages. OpenAI warns of a future in which it is impossible to distinguish truth from misinformation online. “Synthetic imagery, audio, and video imply that technology is reducing the cost of creating fake content and waging disinformation campaigns.
The wider community needs to be more skeptical of the text it finds online, just as the phenomenon of ‘deep fakes’ demands more skepticism about images,” OpenAI explained. Lately, malicious actors, some of them politically motivated, have been active in cyberspace. They often use bots, fake accounts, and dedicated teams to target people with hateful comments or smears that make them afraid to speak up, or that are hard to hear or trust. “We should consider how research into the generation of synthetic images, video, audio, and text may further combine to unlock new capabilities that have not yet been anticipated for these actors, and should try to create better technical and non-technical countermeasures,” OpenAI said.
Microsoft Lends a Big Hand
Microsoft announced it would invest US$1 billion (around Rp 14 trillion) in OpenAI, a San Francisco-based artificial intelligence (AI) research company. One of the leading figures behind OpenAI is Elon Musk, and the company also has the backing of other prominent figures such as LinkedIn co-founder Reid Hoffman and former Y Combinator president Sam Altman. OpenAI CTO Greg Brockman said the investment will support the development of artificial general intelligence (AGI): AI with the ability to learn any work that humans can do.
“AI is one of the most transformative technologies of our time and has the potential to help solve many of the world's most pressing challenges,” Microsoft CEO Satya Nadella was quoted as saying by VentureBeat, Wednesday (24/7/2019). Researchers at OpenAI recently published an analysis showing that from 2012 to 2018, the amount of computing used to train AI grew more than 300,000-fold. For example, the OpenAI Five project competed against professional Dota 2 players last summer. Running on the Google Cloud Platform, OpenAI Five played the equivalent of 180 years of Dota 2 every day, using 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores.
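The 300,000-fold figure implies a remarkably short doubling time. A quick back-of-the-envelope check, taking the six-year 2012–2018 span from the article (OpenAI's own analysis quotes a doubling time of roughly 3.4 months over a slightly different measurement window):

```python
import math

growth = 300_000   # compute growth reported in the analysis
months = 6 * 12    # 2012-2018 span, as stated in the article

doublings = math.log2(growth)        # number of times compute doubled
per_doubling = months / doublings    # months per doubling

print(f"{doublings:.1f} doublings, roughly one every {per_doubling:.1f} months")
```

For comparison, Moore's law doubles transistor counts only about every two years, so training compute was growing far faster than hardware alone can explain.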
Furthermore, OpenAI has contributed open-source tools such as Gym, a toolkit for testing and comparing reinforcement learning algorithms. Other tools include CoinRun, Neural MMO, Spinning Up, Sparse Transformers, and MuseNet. Sparse Transformers can make predictions over long text, image, and audio sequences, while MuseNet can generate new four-minute songs with 10 different instruments in a variety of genres and styles.
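Gym's value is that it standardizes a single reset()/step() loop, so any learning algorithm can be benchmarked against any environment. The toy environment below imitates that interface without importing the gym package itself; the environment and its reward scheme are invented for illustration:

```python
# A self-contained toy imitating the reset()/step() interface that Gym
# standardizes (this is NOT the gym package). The agent must walk from
# position 0 to position 5; every step before the goal costs -1 reward.
class WalkEnv:
    GOAL = 5

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.pos = 0
        return self.pos

    def step(self, action):
        """action: 0 = step left, 1 = step right.
        Returns (observation, reward, done, info), as Gym does."""
        self.pos += 1 if action == 1 else -1
        done = self.pos >= self.GOAL
        reward = 0.0 if done else -1.0
        return self.pos, reward, done, {}

# The standard agent-environment loop used to compare algorithms.
# Here the "policy" is trivially hard-coded to always step right.
env = WalkEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done, _ = env.step(1)
    total += reward
print(f"episode ended at pos {obs} with return {total}")  # pos 5, return -4.0
```

Because every Gym-style environment exposes the same loop, swapping in a smarter policy, or a harder environment like CoinRun, requires no change to the benchmarking code.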
Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to redesign itself at an ever-increasing rate, an unstoppable “intelligence explosion” could lead to human extinction. Musk characterizes AI as humanity’s “biggest existential threat.” OpenAI’s founders structured it as a non-profit so that they could focus its research on creating a positive long-term human impact.
OpenAI states that “it’s hard to fathom how much human-level AI could benefit society,” and that it is equally difficult to comprehend “how much it could damage society if built or used incorrectly”. Research on safety cannot safely be postponed: “because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach.” OpenAI states that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible…”, a sentiment that has been expressed elsewhere in reference to a potentially enormous class of AI-enabled products: “Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few? Of course not.” Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Vishal Sikka, former CEO of Infosys, stated that an “openness” where the endeavor would “produce results generally in the greater interest of humanity” was a fundamental requirement for his support, and that OpenAI “aligns very nicely with our long-held values” and their “endeavor to do purposeful work”. Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.
Musk posed the question: “What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity.” Musk acknowledged that “there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”