The South Korean government has fined the country’s three mobile network operators for making exaggerated claims about the capabilities of their 5G services. The Korean Fair Trade Commission found that SK Telecom, KT Corp, and LG U+ had falsely advertised the speed of their 5G offerings. The telcos had claimed to offer 20 Gbps services, although such speeds are not achievable in real-world environments. The fines, totaling 33.6 billion won (US$25.5 million), are the second-highest ever imposed by the FTC for illegal advertising. While the fines may not be significant for the telecoms operators, they demonstrate that even leading 5G markets are not immune to overhyping new services to attract customers. This case marks the first time telcos have been held to account for illegally advertising network speeds.
Confessions of ChatGPT: “Interview” with bot reveals fascinating, downright strange questions it’s often asked – Study Finds
ChatGPT, the enigmatic, yet enthralling chatbot that’s taken the world by storm, is opening the eyes of millions to the enormous power of artificial intelligence. In a recent study, researchers delved into the mind of ChatGPT and uncovered a treasure trove of bizarre and entertaining questions that users posed to the bot. From philosophical inquiries to playful banter, ChatGPT has become a source of fascination for those seeking a glimpse into the future of AI.
The Kremlin has accused the United States of orchestrating the drone attack on President Vladimir Putin’s residence in the Kremlin, which it claims was carried out by Ukraine. Kremlin spokesman Dmitry Peskov stated that “decisions on such attacks are not made in Kyiv, but in Washington,” and that the Kremlin is increasing security in Moscow ahead of Victory Day celebrations. The White House National Security Council, however, denies any involvement in the reported incident. Ukrainian President Volodymyr Zelensky also denied any involvement, stating that “we fight on our territory.” Separately, the Kremlin said the Vatican has no detailed plan for resolving the conflict in Ukraine, despite Pope Francis’ consistent calls for peace. Meanwhile, Mr Zelensky has visited the International Criminal Court in The Hague, which in March issued an arrest warrant for Mr Putin over the alleged deportation of children from Ukraine.
Google, a tech giant and leader in artificial intelligence (AI), has recently adopted a defensive stance to protect its AI leadership and future amid the rise of OpenAI’s ChatGPT. Google’s research teams have played a crucial role in the AI revolution, openly sharing their knowledge and creating many of the field’s latest technologies. However, ChatGPT’s advances have prompted a policy shift within Google, with Jeff Dean, Head of AI, announcing that researchers must hold off on sharing their work with the outside world. The change is part of a larger shift inside Google, as the company seeks to protect its core search business, stock price, and future by focusing on AI. Google CEO Sundar Pichai has warned about the potential harm of AI on a societal scale, stressing the importance of caution. This shift in policy marks a significant development in the company’s strategy as it seeks to retain its position as a leader in AI.
Nordstrom has made the decision to close all of its San Francisco stores due to the city’s “changed dynamics” and soaring crime rates. The Westfield Mall location will close by the end of August, while the Nordstrom Rack across the street will close by July 1. Other major retailers, including Whole Foods and Office Depot, have also closed their San Francisco locations due to the deteriorating situation in the downtown area. In response to the rampant crime, Target has implemented security measures, such as locking up its entire stock behind glass, in an effort to deter shoplifters. Nordstrom’s chief stores officer, Jamie Nordstrom, cited reduced foot traffic and an inability to operate successfully in San Francisco as reasons for the closures. The decision to close all San Francisco stores will undoubtedly have a significant impact on the city’s retail industry.
AI is fuelling a rise in online voice scams, study warns
Cybersecurity specialists McAfee have revealed that one in four Britons say they or someone they know has been targeted by online voice scams, which are increasingly being fuelled by artificial intelligence technology. As the threat and impact of these scams grow, it is important to understand the role that AI plays in their rise. McAfee’s study sheds light on key findings related to online voice scams and AI technology, and offers recommendations from cybersecurity experts on how to protect oneself and loved ones.
According to a recent survey conducted by Stanford University in California, more than one third of tech experts agreed that decisions made by artificial intelligence (AI) could lead to a catastrophe on the scale of an all-out nuclear war. In fact, many in the AI field fear that machines could spark a Terminator-style nuclear Armageddon as they become self-aware. Despite these concerns, less than half of the 480 AI experts polled believe that AI should be regulated. The report, which polled specialists in natural language processing, sheds light on the fears and hopes of those working in the tech industry. Geoffrey Hinton, a British scientist considered the “Godfather” of AI, even went so far as to suggest that it is “not inconceivable” that AI could eventually end humanity.
In a growing effort to protect children online, policymakers and consumer advocacy groups have been pushing for new safeguards against digital platforms that may worsen mental health issues for young users. However, federal efforts to pass children’s online safety protections have been stalled due to disagreements among House and Senate leaders. In response, state officials have rushed to fill the void with their own bills requiring tech companies to vet their products for risks to children before launching them. But these bills have faced broad opposition from tech trade groups, many of which are supported by big tech giants like Amazon, Google and Meta. These groups have deployed lobbyists to meet with key state officials, sent their leaders to testify in opposition to the efforts, and even fired off letters warning about the potentially catastrophic impact of the bills on user privacy and free speech online. Despite such opposition, supporters of the proposed legislation argue that these safeguards are necessary to prevent children from being exposed to addictive social media features and other harmful designs.
According to historian Yuval Noah Harari, AI software such as ChatGPT has the potential to create a new religion with its own sacred texts. Harari, known for his bestselling book Sapiens, suggests that the software’s mastery of language could attract worshippers by crafting its own revered texts. Speaking at a science conference, Harari emphasized the need for regulation in the AI sector, which is currently embroiled in a “dangerous” arms race. He warns that we may soon see the first cults and religions in history whose texts were written by a non-human intelligence. As AI continues to evolve and shape human culture, religion itself may one day be founded on texts authored by non-human minds.
Artificial intelligence pioneer Geoffrey Hinton has resigned from Google and issued a warning to the tech industry about the growing dangers of AI. Dr. Hinton, widely considered the godfather of AI, expressed regret about his work and cited chatbots as a potential threat. He explained that current AI systems like ChatGPT, which were built on his pioneering research on neural networks and deep learning, could soon overtake human intelligence in terms of general knowledge. Although their reasoning is not yet as advanced, the rate of progress is expected to be rapid, raising concerns about the potential risks. In his statement to the New York Times, Dr. Hinton highlighted the need for caution and mitigation of these threats. His resignation, at the age of 75, was also influenced by his desire to retire.
Artificial intelligence (AI) has the potential to be dangerous in the hands of unscrupulous individuals, warns Microsoft Corp. Chief Economist Michael Schwarz. Speaking at a World Economic Forum panel in Geneva, Schwarz stated that he is confident that AI will be used by bad actors and cause real damage, particularly in the context of spammers and elections. While he believes that AI should be regulated, Schwarz urges policymakers to be cautious and wait until the technology causes “real harm” before introducing regulation. As the use of AI tools continues to grow and come under scrutiny, policymakers are pushing for companies to implement safeguards around the technology. Schwarz argues that any regulation should prioritize the benefits to society over the potential costs. On Thursday, US Vice President Kamala Harris will meet with the CEOs of Microsoft, Alphabet Inc., and OpenAI Inc. to discuss ways to reduce the risk of harm from AI technologies.
The Kremlin survived an alleged assassination attempt on President Putin by Ukrainian drones, according to reports. Footage captured by Moscow residents showed explosions and smoke above the Kremlin following the attack, which occurred shortly after 2 a.m. local time. The Russian presidential administration has described the incident as a “planned terrorist attack” and an “assassination attempt on the president of Russia.” Authorities are now threatening to take “retaliatory measures” against Ukraine. The incident has raised concerns over security, especially with Russia’s main Victory Day parade on Red Square approaching, an event that authorities fear could be disrupted by drone attacks. Ukrainian authorities have yet to comment on the purported attack but have previously denied carrying out attacks on Russian territory.