A widely viewed video showcasing the purported real-time interaction capabilities of Google's artificial intelligence model, Gemini, has been revealed to have been manipulated for demonstration purposes. The video, which has garnered 1.6 million views on YouTube, depicts seamless back-and-forth interactions between the AI and a human, with Gemini responding to spoken prompts and visual stimuli.
Google acknowledged in the video's description that the presentation was not entirely genuine, noting that response times had been accelerated for the demonstration. In a blog post released alongside the video, Google disclosed the actual process behind its creation.
Contrary to the apparent spontaneity of the AI's responses to voice and video, Gemini was in fact prompted with static image frames taken from the footage, accompanied by text-based prompts. Google confirmed this to the BBC, clarifying that the demonstration was produced by posing questions to the model through text and still images.
A Google spokesperson stated, “Our Hands on with Gemini demo video shows real prompts and outputs from Gemini. We made it to showcase the range of Gemini’s capabilities and to inspire developers.”
In specific moments in the video, such as identifying the materials of objects and tracking a hidden ball in a cups-and-balls magic routine, the AI's seemingly impressive responses were based on still images rather than live interaction. Google explained that the AI was provided with static images representing the challenges shown in the video, allowing it to showcase Gemini's capabilities across
[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents