Before December 2023, OpenAI was the leader in artificial intelligence with its groundbreaking GPT-4 model. Then, seemingly out of nowhere, a new AI company emerged from China with a model called DeepSeek LLM. In January 2025, it followed up with DeepSeek R1, a model on par with ones that took OpenAI years to perfect.
DeepSeek is an AI development company based in Hangzhou, China, founded by entrepreneur Liang Wenfeng in May 2023. Although Wenfeng found limited success with his initial ventures into artificial intelligence, he eventually had a breakthrough with a startup called High Flyer. That company specializes in leveraging AI and mathematical analysis for investment strategies, and it reached a valuation of 10 billion yuan ($1.3 billion) within three years.
After High Flyer’s success, Wenfeng turned to DeepSeek, which released its first model, DeepSeek LLM, in November 2023. Over the next year, he acquired thousands of NVIDIA graphics processing units, the chips essential for model development, to further accelerate the production of AI models. The models DeepSeek created became some of the best in China but gained little traction in America until the release of DeepSeek R1 in January 2025.
DeepSeek R1 went on to perform on par with or better than OpenAI’s o1 in almost all tests, at a fraction of the price. For instance, generating one million output tokens, where a token is a small chunk of text such as a word or piece of a word, costs $60 with OpenAI’s o1 but only $2.19 with DeepSeek R1.
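To make the price gap concrete, the arithmetic behind those figures can be sketched in a few lines of Python. The per-million-token prices are the ones quoted above; the workload size is only a hypothetical example.

    # Rough cost comparison using the per-million-output-token prices quoted above.
    # Prices change over time, so treat this as a back-of-the-envelope sketch.
    O1_PRICE_PER_MILLION = 60.00   # USD per 1,000,000 output tokens (OpenAI o1)
    R1_PRICE_PER_MILLION = 2.19    # USD per 1,000,000 output tokens (DeepSeek R1)

    def output_cost(tokens, price_per_million):
        # Cost in dollars to generate a given number of output tokens.
        return tokens / 1_000_000 * price_per_million

    tokens = 5_000_000  # hypothetical workload: five million output tokens
    print(f"o1 cost: ${output_cost(tokens, O1_PRICE_PER_MILLION):,.2f}")
    print(f"R1 cost: ${output_cost(tokens, R1_PRICE_PER_MILLION):,.2f}")
    print(f"R1 is roughly {O1_PRICE_PER_MILLION / R1_PRICE_PER_MILLION:.0f}x cheaper per output token")

At those rates, the same five-million-token workload would cost about $300 on o1 and roughly $11 on R1, a difference of about 27 times.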
However, users who rely on DeepSeek hand over information that the company may retain for “as long as necessary,” according to its privacy policy. Data such as one’s email address, phone number and date of birth, all entered when creating an account; any user input, including text and audio, as well as chat histories; and even so-called “technical information,” ranging from a user’s phone model and operating system to their IP address, are all shared with DeepSeek. The company can keep this information indefinitely, allegedly to “enhance its safety, security and stability.” From there, DeepSeek is able to share it with other parties, such as service providers, advertising partners and its corporate group.
“Regardless of people’s thoughts, hopefully the government will impose some sort of limitation on what AI can do,” FHS Assistant Principal Andy Walczak said. “Other countries have their own thoughts and opinions on AI, but to our benefit, our country could create the [safest] version of AI.”
DeepSeek experienced a severe Distributed Denial-of-Service (DDoS) cyberattack on Jan. 30, 2025. A DDoS attack works by flooding an internet server with large amounts of traffic in order to overload services and prevent others from using it. DeepSeek faced an attack of 3.2 terabits per second, equivalent to transmitting over 100 4K movies per second. This overwhelmed the DeepSeek servers, causing them to go offline and halt their services.
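For a rough sense of what 3.2 terabits per second means, the movie comparison can be checked with a quick unit conversion. The 4-gigabyte movie size below is an assumption for illustration, since real 4K file sizes vary widely with length and compression.

    # Back-of-the-envelope check of the "100 4K movies per second" comparison.
    attack_rate_tbps = 3.2                                       # reported attack traffic, terabits per second
    attack_rate_gb_per_sec = attack_rate_tbps * 1e12 / 8 / 1e9   # convert terabits/s to gigabytes/s

    movie_size_gb = 4.0                                          # assumed size of one compressed 4K movie
    movies_per_second = attack_rate_gb_per_sec / movie_size_gb

    print(f"{attack_rate_gb_per_sec:.0f} GB per second, or about {movies_per_second:.0f} 4K movies per second")

At 3.2 terabits per second, roughly 400 gigabytes of data hit DeepSeek’s servers every second, which works out to about 100 movies per second under that assumption.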
“All new technologies have unintended consequences that are difficult to foresee,” AP Computer Science and Pre-Calculus Honors teacher David Dobervich said. “If, five years from now, it were possible to have AI do all my grading, I probably wouldn’t want them to since I learn about my students’ thinking and get new ideas for teaching as I read student work. Having AI [models] grade might make me more efficient, but I’d be missing out on all [the insight into my students/opportunities for improvement] and would probably be a worse teacher.”
AI has become a pervasive presence in society and affects many day-to-day jobs. It has also become a highly competitive field, with U.S. models competing against one another and other countries fighting to join the race.
Another competitor in the AI race is Grok, an AI model developed by xAI, a company founded by Elon Musk. Grok carries Musk’s style of simplification, optimization and automation. It boasts a “sense of humor” and is built on top of X, with direct access to X posts as a primary source of information. xAI was founded in March 2023 and, within just eight months, had already built a massive data center named Colossus in Memphis, Tenn., equipped with over 200,000 graphics processing units.
With every new model and technological breakthrough in artificial intelligence, human reliance on AI drifts closer to overreliance: accepting AI suggestions and decisions regardless of their accuracy. This has already had adverse effects on many fields, including education and medicine.
“In theory, a human collaborating with an AI system should make better decisions than either working alone,” Helena Vasconcelos, a Stanford undergraduate student majoring in symbolic systems, said. “But humans often accept an AI system’s recommended decision even when it is wrong – a conundrum called AI overreliance. This means people could be making erroneous decisions in important real-world contexts such as medical diagnosis or setting bail.”
“Before smartphones, people had to make and keep social commitments because there was no way to notify someone of changes of plans,” Dobervich said. “I’d say there’s been a shift away from that — it seems reasonable to say that habitually offloading mental work to AIs will make humans less adept at those mental tasks.”
AI’s ability to undertake many cognitive tasks, such as data analysis, decision-making and problem-solving, has greatly reduced the need for human action. While this can mean increased efficiency, it also reduces human participation in deep thought and complex problem-solving. As AI becomes more integrated into daily life, the risk of overreliance grows.
“If [I] turn to AI every time I feel uninspired or stuck on something, is that, in the long run, [allowing] me to be more productive and creative, or is it making me more dependent and helpless?” Dobervich said.