The Age of AI
- Francis Val
- 7 days ago
- 5 min read
We live in the age of computer technology. It has grown at a remarkable pace since the World Wide Web reached the mainstream in the mid-1990s, the era often called the dot-com boom. We have continued to innovate on its design and push it to new heights, to the point where we can carry a supercomputer in our pockets. With this drive for never-ending innovation, however, we seem to have created our own Frankenstein's monster: AI. Companies have begun training new-generation AI models on people's online data, drawn from social media posts, searches, and the pictures people share online. This has sparked considerable debate, with much of the public feeling that their right to privacy is being violated to train these powerhouse tech companies' AIs. Like any basic human right, there should be clear protections for online users when they use apps or websites.
The idea of AI being used in so many ways has surprised a lot of people. We are finally reaching the opening phase of all those futuristic movies we watched as kids. AI is everywhere and in everything, even in video games like Call of Duty (Yin-Poole). In addition, the French video game company Ubisoft recently announced an internal AI that it hopes will create its own games based on what Ubisoft has made in the past (Maas). This has come as a shock to many who now fear being replaced as AI slowly creeps into every facet of video game development.

But now we need to turn our attention to what is happening beneath the surface: how are these AIs able to do all this with so little difficulty? As mentioned in the intro, companies are shelling out millions, and in a few cases billions, of dollars to develop AI applications and train them on people's online profiles, which serve as the base for the models' thinking. It has become an open secret at this point that companies follow you online like a hunter tracking a deer; they see everywhere you go and save that information (Miller). Is that legal? Yes, actually. People are now seeing the ramifications of not caring what these companies observe online, because all of it is being pumped into their AI. Certain AI tools and programs have become so good at what they do precisely because they base their output on millions of people's tendencies, smashed together into one.
In many cases, companies will hide the option to control data sharing, or make it easy to miss once you agree to the terms of service.
Don’t believe me? Here is an example of AI being implemented: Smart Compose, a Google-run AI that works much like Grammarly, which is also AI. Smart Compose is designed to make suggestions to email writers and help them write in their own style after taking inventory of all the user’s previous emails. This shouldn’t be allowed. While the feature can be toggled off, it shouldn’t be an option at all for a massive company like Google to read and use your PERSONAL emails. That is private, no ifs, ands, or buts. In an article by Johana Bhuiyan, the author captures the position of many online users: “Left to their own devices, more than 300,000 Instagram users have posted a message on their stories in recent days stating they do not give Meta permission to use any of their personal information to inform their AI” (Bhuiyan par. 3).
I am mostly on the side of the users, but playing devil’s advocate for the companies for a moment: if someone makes a public post on a platform, then it should be fair game for both the AI and the company to use that post. If they put it on their private story or feed, then users have more of a leg to stand on. I also feel that users of Facebook, Twitter, and Instagram know what they are signing up for. All three companies have a long-standing history of using people’s personal data (Reuters), yet thousands still flock to the platforms every year. I am somewhat of an exception to this: I rarely use Twitter, and Facebook is just a means for me to sell things on its Marketplace.
However, people should automatically have a right to privacy, because it is a scary thought that any interaction you have with your platform’s AI is already influenced by your past interactions and posts on that platform. When someone makes an account on one of these platforms, it should work the way a bank does. When you put certain items in a safe-deposit box for safekeeping, you should not have to worry about the bank then opening it to learn things about you. People find AI’s ability to use old posts troublesome because many posts no longer reflect their beliefs years later, or survive only as reminders of an embarrassing moment. Why should AI be allowed to use these posts?

Furthermore, on the matter of online rights: what are they, exactly? Nobody knows for sure, even though the Web has been around since the 1990s. Free speech, by comparison, took decades and many court cases to iron out, and web rights are still very much in their infancy. As Caroline Meinhardt, policy research manager at Stanford’s HAI, points out, “If we want to give people more control over their data in a context where huge amounts of data are being generated and collected, it’s clear to me that doubling down on individual rights isn't sufficient” (qtd. in Miller par. 16). This lends credence to the overall problem we face with tech companies: we are skipping ahead to demanding personal, private rights before we have secured collective ones. Instead of crawling, then walking, then running, we are trying to run before our legs can even support us. This unsettled debate is what has allowed companies to get ahead with AI and use it to access people’s data. The obvious counter, then, is to regroup on each platform, take stock of the current standards around AI, and fight to change them where needed. This would be a huge undertaking, and the odds that people can form a unified vision on these issues are low, very low.
Writing this will not change the trajectory of AI; I merely wish to speak my mind and present my arguments to the court of the world. AI is quickly becoming a powerful player at the poker table. Maybe it will become what we fear and robots will take over the world, or maybe not. Either way, there is no question that a certain degree of privacy should be maintained even in the world of computer technology. Action is needed to curb tech companies and restrict their invasive AI mining of people’s online information.
Works Cited
Bhuiyan, Johana. “Companies Building AI-Powered Tech Are Using Your Posts. Here’s How to Opt Out.” The Guardian, Nov. 2024, https://doi.org/10_1267/master/2110. Accessed Nov. 2025.
Miller, Katharine. “Privacy in an AI Era: How Do We Protect Our Personal Information?” Hai.stanford.edu, Stanford University, 18 Mar. 2024, hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information.
Yin-Poole, Wesley. “Activision Finally Admits It Uses Generative AI for Some Call of Duty: Black Ops 6 Assets after Backlash Following ‘AI Slop’ Zombie Santa Loading Screen.” IGN, 25 Feb. 2025, www.ign.com/articles/activision-finally-admits-it-uses-generative-ai-for-some-call-of-duty-black-ops-6-assets-after-backlash-following-ai-slop-zombie-santa-loading-screen.
r/gaming. “Call of Duty: Black Ops 7 Is Littered with AI Art.” Reddit, 2024, www.reddit.com/r/gaming/comments/1owxnoi/call_of_duty_black_ops_7_is_littered_with_ai_art/. Accessed 4 Nov. 2025.
Pulver, Andrew. “Douglas Rain, Voice of HAL in 2001: A Space Odyssey, Dies Aged 90.” The Guardian, 12 Nov. 2018, www.theguardian.com/film/2018/nov/12/douglas-rain-dies-2001-a-space-odyssey-hal-9000. Accessed Nov. 2025.
silver fox. “The Decline in Literacy & Rise in AI.” YouTube, 30 June 2025, www.youtube.com/watch?v=5FuxNr90T8o.
Maas, Jennifer. “Ubisoft Sets Generative-AI Game ‘Teammates’ from ‘Neo NPC’ Developers.” Variety, 21 Nov. 2025, variety.com/2025/gaming/news/ubisoft-generative-ai-game-teammates-neo-npc-developers-1236588038/. Accessed Nov. 2025.