Meta Trains AI with User Posts, Hard to Opt Out
Meta has recently come under scrutiny for its use of user-generated content to train its AI models. This practice was highlighted through notifications sent to European users about updates to Meta’s privacy policy, reflecting new generative AI features being introduced in the region. According to Meta’s generative AI privacy policy, the company utilizes “information shared on Meta’s products and services,” such as posts, photos, and captions, to improve its AI. However, it explicitly excludes private messages from this data collection.
The company states that these notifications are part of its compliance with local privacy regulations, including the GDPR in Europe. The policy change is set to take effect on June 26, 2024, as indicated in a notice received by a UK-based user, Philip Bloom.
In contrast, users in the U.S. have not received such notifications, suggesting that these data practices may already be in place there. Since September 2023, Meta has incorporated generative AI features across its platforms, including the ability to tag the Meta AI chatbot in conversations on Messenger, Instagram, and WhatsApp and to interact with AI personas modeled after celebrities like Snoop Dogg and Kendall Jenner.
Meta has also integrated AI more deeply into its user interface, replacing the traditional search function across its platforms with an AI-driven tool that allows users to discuss specific posts. This feature cannot be disabled, which has caused frustration among users.
For users in the UK and EU, there is a provision to opt out of, or “object” to, the use of their data for training AI, although the process is complex and user-unfriendly. X user Tantacrul, among others, has expressed surprise and dissatisfaction at the intricate steps required, suggesting that the design may be intentionally cumbersome to discourage users from opting out.
How to opt out of sharing your data with Meta AI models
To completely disconnect your data from Meta, account deletion is necessary. However, there are several strategies to reduce the amount of data you share with the company.
If you attempt to opt out of Meta’s data usage for AI, you may encounter regional restrictions, with notifications indicating that the option is only available to users in specific locations. According to a guide by PCWorld, there is an alternative pathway through a Meta help center page that allows users to manage personal information from third parties that Meta uses for AI development.
On this help center page, users are presented with three specific request types concerning third-party data:
- Access, Download, or Correct Information: Users can request to review, download, or amend any personal data from third parties that Meta uses for AI enhancement.
- Delete Information: This option allows users to ask for the deletion of their personal data from third parties that is used in Meta’s AI development.
- Submit Concerns: Users can raise concerns about how their personal data from third parties is being used in responses from AI models, features, or experiences at Meta.
It’s important to note that these forms do not provide a direct method to opt out from data sharing entirely; they focus on managing data provided by third parties. Moreover, submitting a request does not guarantee immediate action, as Meta reviews each submission in accordance with applicable local regulations. Therefore, users in regions like the EU or UK, which are governed by stricter privacy laws, may find it somewhat easier to manage or erase their data.
After selecting the appropriate request type, Meta asks users to provide their country of residence, full name, email address, and detailed justification for their request. This information is necessary to process the inquiry and ensure it complies with local data protection regulations.
The form requires users to provide specific examples of interactions where their personal data was used in a way that prompted their request. This can include detailing any prompts entered that resulted in a response containing their personal data, or uploading a screenshot of such a response. This specificity helps Meta identify the exact issue and evaluate the request, although it does not guarantee that the request will be automatically fulfilled.
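For readers who want to gather this material before opening the form, the information it asks for can be summarized as a simple checklist. The sketch below models those fields in Python purely for illustration; the names are hypothetical and do not correspond to any public Meta API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical model of the information Meta's help-center form requests.
# Field names are illustrative only; there is no public Meta API for this.
@dataclass
class ThirdPartyDataRequest:
    request_type: str            # "access_download_correct", "delete", or "concern"
    country_of_residence: str    # determines which privacy laws apply to the request
    full_name: str
    email: str
    justification: str           # detailed reason supporting the request
    example_prompts: list[str] = field(default_factory=list)  # prompts that surfaced your data
    screenshot_path: Optional[str] = None  # optional evidence of the AI response

# Example of a prepared deletion request:
request = ThirdPartyDataRequest(
    request_type="delete",
    country_of_residence="Germany",
    full_name="Jane Doe",
    email="jane@example.com",
    justification="An AI response reproduced personal details sourced from a third party.",
    example_prompts=["Who is Jane Doe of Berlin?"],
)
```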
How to disable “activity off Meta”
To enhance data privacy on Meta platforms, users can manage their “activity off Meta” settings, which allow visibility into which external sites and apps share information with Meta. From this page, users can actively disconnect from specific sites that have been sharing data with Meta, clear their history of past activity shared, and adjust settings to restrict or prevent future data sharing. This control offers a proactive approach to safeguard personal information by severing ties with third-party sources that contribute data to Meta’s databases.
Disconnecting and clearing previous data activity through Meta’s settings removes information already gathered from third-party sites and apps. To stop future sharing as well, navigate to the “Manage future activity” option and select “Disconnect future activity.” Together, these two steps erase the historical data linked to your account and set a safeguard that prevents external sources from sharing your data with Meta going forward, giving you meaningful control over your personal information on Meta platforms.
However, it’s important to note that these settings primarily regulate data sharing with third parties and may not directly affect how Meta utilizes your data for internal purposes, such as training its AI models; how effective they are at controlling that internal use remains uncertain. We have contacted Meta for more detailed information regarding its internal data use policies and will provide updates as new information becomes available.
Will Siri Emulate ChatGPT? Spotlight on Apple’s WWDC
Apple’s Siri has historically set benchmarks in voice interaction technology but now faces stiff competition from more advanced AI chatbots. Despite pioneering the space with user-friendly voice responses as early as 2011 with the iPhone 4S, Siri today appears to lag in understanding and retaining context during interactions, a key capability of its newer rivals.
For instance, while Siri can accurately fetch information like the dates for upcoming Olympic games, it struggles with follow-up interactions that require understanding of prior exchanges. A simple command to add the event to a calendar reveals limitations in Siri’s conversational and contextual abilities, where it fails to recognize “Olympics” as the event name without additional prompting.
This gap highlights Siri’s need for an upgrade in contextual awareness to seamlessly weave through dialogue threads, a standard feature in newer AI-driven virtual assistants like OpenAI’s ChatGPT and Google’s Gemini. The anticipation is high for the next Worldwide Developers Conference (WWDC) starting on June 10, where Apple is expected to announce substantial enhancements in iOS 18, particularly for Siri.
Such improvements could significantly redefine Siri’s role, not just as a reactive assistant but as a proactive helper capable of understanding and anticipating user needs more effectively. Since its introduction, Siri has inspired the development of sophisticated voice assistants across platforms, including Amazon’s Alexa and Google’s Assistant. These advancements underscore the industry’s shift towards creating more intuitive, conversational technologies, positioning Siri at a crucial juncture to evolve or risk falling behind in the rapidly advancing landscape of AI-powered interactions.
Move over Siri, multimodal assistants are here
Introduced in 2011, Siri was a pioneering force in voice-activated technology, impressing users with its ability to interact in a human-like manner. Initially, Siri set a high standard for voice assistants, but over the years, it has faced increasing competition from more advanced systems like Alexa and Google Assistant. These rivals have not only excelled in understanding and responding to queries but have also successfully integrated into the smart home ecosystem, a domain where Siri has been less dominant.
As of 2024, the landscape for virtual assistants has evolved dramatically, further energized by the latest advancements in generative AI. Companies such as OpenAI, Google, and Microsoft have recently introduced a new generation of virtual assistants that feature multimodal capabilities, enabling them to understand and interact in ways that go beyond simple voice commands to include visual inputs and more complex contextual understanding.
This technological leap presents a significant challenge to Siri, which is now perceived as lagging behind its newer counterparts in both capability and innovation. The threat to Siri’s relevance is so pronounced that NYU professor Scott Galloway has referred to these new AI enhancements as potential “Alexa and Siri killers” on his podcast. This reflects a critical moment for Apple, which must innovate and enhance Siri’s capabilities to maintain its competitiveness in a rapidly advancing field dominated by multifaceted AI-driven technologies.
OpenAI recently showcased its newest AI model, GPT-4o, highlighting the rapid advancements in virtual assistant technologies. During a demonstration in San Francisco, GPT-4o impressed audiences with its human-like interaction abilities, including nuanced tone inflections, sarcastic responses, whispering, and even flirting capabilities. These features evoke comparisons to the AI character voiced by Scarlett Johansson in the film “Her,” where a virtual assistant forms a personal connection with a user. The similarity to Johansson’s voice led to controversy, with the actress expressing concerns that the AI’s voice was uncomfortably close to hers, though OpenAI clarified that this resemblance was unintentional.
Beyond the voice controversy, GPT-4o’s demo highlighted its robust multimodal capabilities, which allow it to process and respond to various types of inputs, including text, images, spoken language, and video. This means GPT-4o can engage in dynamic conversations about a photograph, interpret and describe video content, and provide insights on digital or real-world articles, enhancing its utility beyond traditional text-based interactions.
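As a concrete illustration of what “multimodal” means in practice, the minimal sketch below sends both text and an image to GPT-4o through OpenAI’s Chat Completions API. It assumes the official openai Python SDK is installed and an OPENAI_API_KEY is set in the environment; the image URL is a placeholder.

```python
# Minimal sketch of a text-plus-image request to GPT-4o using the official
# OpenAI Python SDK (`pip install openai`); assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A single user turn can mix text and image parts.
            "content": [
                {"type": "text", "text": "What is happening in this photo?"},
                {
                    "type": "image_url",
                    # Placeholder URL; any publicly reachable image works.
                    "image_url": {"url": "https://example.com/street-scene.jpg"},
                },
            ],
        }
    ],
)

# The model's reply describes the image content in natural language.
print(response.choices[0].message.content)
```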
Just a day after OpenAI’s presentation, Google unveiled its own advancements in AI with Project Astra. This prototype virtual assistant, demonstrated through a video, showcases how it can integrate with a user’s smartphone camera to interact with their immediate surroundings. In the demonstration, which took place in Google’s London office, Project Astra was able to identify and discuss objects within the environment, such as a speaker in the room, showcasing its potential to assist users in interactive and contextually aware ways.
These developments from OpenAI and Google mark significant milestones in the evolution of AI assistants, moving them closer to seamless integration into everyday tasks and interactions. They reflect a shift towards creating more intuitive and versatile AI tools that can understand and interact with the world in a more human-like manner.
Google’s Project Astra is pioneering a new era of virtual assistants with its ability to perceive and remember environmental details. For example, when queried about the location of misplaced glasses, Astra responded accurately, noting they were “On the corner of the desk next to a red apple.” This capability demonstrates significant advancements in contextual understanding and memory retention in AI systems.
The development of sophisticated virtual assistants extends beyond Google and OpenAI. Elon Musk’s AI venture, xAI, is enhancing its Grok chatbot to include multimodal capabilities, enabling it to process and respond to a variety of sensory data as detailed in recent developer documentation. Similarly, Amazon is not far behind, as it announced plans to equip Alexa, its longstanding virtual assistant, with generative AI technologies. These updates are part of a broader trend where major tech companies are vying to lead in the dynamic field of AI-powered assistants, each adding richer, more interactive features that promise to transform user interactions.
Is Siri transitioning to multimodal capabilities?
Multimodal conversational chatbots are at the forefront of AI assistant technology, hinting at a transformative future for user-device interactions.
Currently, Apple is playing catch-up in the realm of multimodal digital assistants. However, the company has been actively exploring this technology. For instance, in October, Apple detailed its advancements with Ferret, a multimodal AI model capable of understanding on-screen content to assist users with various tasks. According to Apple’s research, Ferret can not only recognize and describe what is displayed on the screen but also facilitate navigation across different apps. This research suggests a potential shift in how users might interact with their iPhones and other Apple devices in the future, leveraging multimodal capabilities to enhance user experience.
Apple is reinforcing its commitment to privacy as a key differentiator for its products and services, planning to position the updated Siri as a more privacy-centric alternative compared to other AI assistants. According to The New York Times, this privacy enhancement involves processing basic Siri requests locally on devices and handling more complex queries via cloud services powered by Apple’s own data centers. Further enhancing its AI capabilities, Apple is reportedly nearing a deal with OpenAI to integrate ChatGPT into the iPhone, suggesting a strategic collaboration rather than direct competition with existing AI chatbots like ChatGPT or Gemini. This collaboration, reported by Bloomberg, indicates that Siri will focus on refining its existing functionalities rather than expanding into new areas like creative writing, maintaining its core utility strengths.
Siri’s Future: Major Updates Expected at Apple’s WWDC
Apple has often adopted a cautious approach with new technologies, preferring to refine and perfect rather than be first to market. This strategy has yielded mixed results. While the iPad was not the first tablet, it quickly became regarded as the best by many, including CNET editors. Conversely, Apple entered the smart speaker market several years after competitors with the HomePod, but it struggled to capture a significant market share compared to the Amazon Echo and Google Home.
On the hardware front, Apple remains the notable absentee in the foldable phone market, with major competitors like Google, Samsung, Honor, Huawei, and others already offering products. Apple’s cautious approach is also evident in its updates to Siri. According to Avi Greengart, lead analyst at Techsponential, Apple tends to update Siri incrementally, focusing on different domains such as sports or entertainment each year.
This year, at Apple’s Worldwide Developers Conference (WWDC), Siri is anticipated to be a central theme, although the focus is expected to be more on catching up with competitors than on introducing groundbreaking features. The upcoming iOS 18 is rumored to include enhanced AI capabilities, integrating AI more deeply into apps and features such as Notes, emoji, photo editing, messages, and email, as reported by Bloomberg. These updates aim to bring Siri up to par with other leading virtual assistants in the AI space.
This year, Apple is set to enhance Siri’s capabilities significantly. According to Bloomberg’s Mark Gurman, Apple plans to leverage large language models similar to those used in ChatGPT to refine Siri’s ability to process and respond to queries. This development aims to make Siri more context-aware, enabling it to handle complex and nuanced questions with greater accuracy.
Additionally, the upcoming iPhone 16 series is expected to feature increased memory, which will support these advanced Siri functionalities, as reported by The New York Times. This enhancement is part of Apple’s broader effort to transform Siri into a more sophisticated and intelligent digital assistant, aligning it with the latest advancements in AI technology.
Tribeca Festival to Feature OpenAI’s AI-Created Films
The 2024 Tribeca Festival is set to showcase a groundbreaking program called Sora Shorts, which features five original short films entirely crafted using OpenAI’s advanced text-to-video AI model, Sora. This marks a significant milestone as it’s the first occasion that films created with Sora will be featured at a major film festival. While AI-generated films have previously been presented at film festivals, the inclusion of Sora Shorts highlights the evolving relationship between AI technologies and the creative industries.
Sora, which remains unavailable to the public, represents a new frontier for filmmakers experienced with AI tools. OpenAI has selectively granted access to this innovative technology to a group of five directors. These directors were chosen under the condition that they adhere to filmmaking guidelines agreed upon last year with major industry guilds, including the Directors Guild of America (DGA), the Writers Guild of America (WGA), and the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA). This collaboration underscores a significant step in integrating AI within established creative and regulatory frameworks, paving the way for future explorations of AI in filmmaking.
Nikyatu Jusu, celebrated for her Sundance Grand Jury Prize-winning film “Nanny,” will join four other filmmakers in premiering their movies created with OpenAI’s Sora at the Tribeca Film Festival. This special screening event is scheduled for June 15 at the Tribeca Film Center Screening Room. The other directors, Bonnie Discepolo, Ellie Foumbi, Reza Sixo Safai, and Michaela Ternasky-Holland, each bring their unique perspectives and styles to this innovative showcase. Following the screening, these filmmakers will engage in a panel discussion, offering insights into their creative processes and experiences working with AI technology to craft their films. This event not only highlights the expanding intersection of film and AI technology but also provides a platform for these directors to discuss the implications and potential of AI in the realm of cinema.
Beyond expanding the horizons of AI in filmmaking, the program underscores Tribeca’s commitment to innovative storytelling. Alongside Jusu, the lineup includes Bonnie Discepolo, a versatile filmmaker and actor; Ellie Foumbi, an award-nominated Cameroonian American director; Reza Sixo Safai, an Iranian-American filmmaker noted for his unique narrative voice; and Michaela Ternasky-Holland, an Emmy winner pioneering work in the XR and metaverse space. The showcase reflects a belief articulated by Tribeca Enterprises co-founder Jane Rosenthal: that storytelling is a powerful tool for change and understanding, whether it comes through traditional cinema or groundbreaking formats like AI-generated films.
OpenAI’s COO, Brad Lightcap, expressed enthusiasm for the role of their AI model, Sora, in empowering filmmakers showcased at the Tribeca Festival. “It’s inspiring to see the creative applications of Sora by these talented filmmakers, and we’re eager to learn from their experiences to enhance Sora for the creative community,” he stated.
Sora distinguishes itself by producing video clips up to 60 seconds long, a significant advance over other models that are limited to 6-8 second clips. This model supports sophisticated cinematic techniques such as varied camera movements, the inclusion of distinct background characters, and dynamic interactions within scenes. Sora also excels in environmental awareness, offering creators the flexibility to reimagine scenes from multiple perspectives or against different backdrops, and to orchestrate events at specific times within the video. However, Sora currently does not support audio for dialogue, and it adheres to strict content guidelines that exclude sexual content and violence. Critics have noted that while Sora offers many technical advancements, it may lack the hallucinatory and surreal qualities that characterize some other AI-driven text-to-video models.
The capabilities of OpenAI’s Sora have stirred considerable debate recently. The model’s ability to generate hyper-realistic images, which surpasses previous text-to-video AI capabilities, has prompted significant reactions. Notably, filmmaker Tyler Perry announced he would pause an $800 million studio expansion due to concerns about the potential job losses associated with advancing AI technology. This decision highlights the broader industry apprehensions about the disruptive potential of AI tools like Sora.
OpenAI recently faced controversy involving Scarlett Johansson, known for her role as Marvel’s Black Widow, who accused the company’s CEO, Sam Altman, of using a voice in the latest ChatGPT model that closely resembles hers, despite her declining to participate. Altman refuted these claims, clarifying that the voice, referred to as “Sky,” was not intended to mimic Johansson and instead belongs to another actress whose natural speaking voice was used for the model. He further explained that the casting decision was made prior to any discussions with Johansson.
Conclusion
The intersection of AI and creative industries is reaching new heights, as evidenced by the array of developments and initiatives detailed in this article. From Meta’s controversial use of user data to train AI models to Apple’s potential enhancements to Siri, aiming to compete in an AI-driven market, the landscape is evolving rapidly. Additionally, the Tribeca Festival’s inclusion of AI-generated films underscores the significant role AI is beginning to play in filmmaking, pushing the boundaries of traditional content creation.
These developments highlight a broader trend: the integration of AI technology is becoming more pervasive and influential across various sectors. As companies like OpenAI and Google push forward with advanced AI capabilities, the implications for privacy, job security, and creative expression continue to provoke public and professional debate. This dynamic raises critical questions about the future of AI in our society, particularly regarding ethical considerations and the balance between innovation and human impact.
As AI continues to develop, it will be essential for industry leaders, regulators, and the public to engage in meaningful discussions to guide this technology’s integration in a way that respects privacy, fosters creativity, and mitigates potential negative impacts on employment and societal norms. The ongoing evolution of AI promises to reshape our digital and physical worlds, making it imperative to navigate these changes with foresight and responsibility.