The AI Startup Bubble: From Marketing Hype to Personalized Intelligence

It feels like I can’t scroll for more than a minute without being served another one. A slick, fast-cut Instagram ad promising to revolutionize my life with artificial intelligence. One moment, it’s a tool that will turn my scattered video clips into a viral-worthy masterpiece, complete with perfectly synced, AI-generated background music. The next, it’s an app that writes my emails, designs my presentations, or even generates a photorealistic avatar of me as a Roman emperor. They all share the same breathless marketing copy and the same audacious claim: they are built on “the best A.I.”

As someone who has followed the field for years, I find this “Cambrian explosion” of AI startups to be both fascinating and deeply cynical. It’s a digital gold rush, and every developer with access to an API from OpenAI, Anthropic, or Google is seemingly staking a claim. They build a user-friendly interface—a wrapper—around a large language model (LLM) or a diffusion model, brand it with a futuristic name, and flood social media with ads. The barrier to entry isn’t building a foundational model, which costs billions; it’s building a slick user experience and having a hefty marketing budget.
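To make the “wrapper” point concrete, here is a minimal sketch of what many of these products amount to under the hood: the startup supplies the interface and branding, while a single API call to someone else’s model supplies the intelligence. This is only an illustrative sketch; it assumes the OpenAI Python SDK (v1+) and an API key in the environment, and the model name and caption-writing prompt are placeholders.

```python
# A minimal sketch of a "wrapper" product: the startup supplies the interface
# and branding, while one API call to someone else's model supplies the brains.
# Assumes the OpenAI Python SDK (v1+); model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def make_viral_caption(video_description: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "You write punchy social media captions."},
            {"role": "user", "content": video_description},
        ],
    )
    return response.choices[0].message.content
```

Wrap that in a polished app with a subscription screen and you have, roughly, the product being advertised.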

And the money is pouring in. Depending on which market analysis you read, the generative AI market is already pulling in tens of billions of dollars in revenue in 2025, with some projections from groups like Bloomberg Intelligence estimating it could swell to over a trillion dollars within the decade. Every subscription fee, from the $9.99/month for an AI writing assistant to the $29.99/month for an advanced video editor, is a drop in this ever-growing bucket. The ads are relentless because the potential payoff is astronomical.

But this gold rush has a significant and messy byproduct: the pervasive and ever-growing mountain of “AI slop.” It’s everywhere. It’s the SEO-optimized blog posts that are grammatically correct but devoid of any real insight or human experience. It’s the bizarre, six-fingered hands and vacant eyes of AI-generated images that pollute forums and social feeds. It’s the generic, soulless background music in podcasts and YouTube videos. It’s the endless stream of automated comments on social media posts that say things like “This is a great point!” without a hint of context.

These tools, in their current state, are optimized for quantity, not quality. They lower the bar for creation, which in theory is democratic, but in practice often leads to a deluge of mediocrity. The incentive isn’t to create something meaningful; it’s to churn out content, to feed the algorithm, to simply produce. We’re drowning in a sea of plausible-sounding nonsense, and it’s making the digital world feel cheap, synthetic, and untrustworthy.

So, where do we go from here? While the current landscape is chaotic, I believe we’re in a necessary, if ugly, transitional phase. The future of AI, in my opinion, will not be defined by these generic, one-size-fits-all services all claiming to be “the best.” Instead, I see two critical directions emerging.

First is a move towards highly specialized, vertically-integrated models. An AI that is mediocre at writing emails, summarizing documents, and generating images is far less useful than three separate, highly-specialized AIs that perform each of those tasks with expert-level precision. We’ll see models trained specifically for legal contract analysis, for molecular biology, for semiconductor design, or for debugging a specific coding language. These won’t be consumer-facing toys but powerful, professional-grade instruments.

But the second direction is the one I find most compelling and, ultimately, more important. I believe the long-term future lies not with ever-larger, centralized models, but with smaller, personalized, and localized AI.

Imagine an AI that runs entirely on your own device, be it a phone or a laptop. It doesn’t send your data to a server in some far-off data center. It has been trained on a massive dataset, yes, but its true power comes from being fine-tuned on your data: your emails, your documents, your notes, your photographs, your chat history. It learns your unique style of writing, your specific vocabulary, the nuances of your professional relationships, and the context of your personal life.

When you ask this AI to draft an email, it won’t produce a generic corporate template; it will sound like you. When you ask it to find a photo, it won’t just search for “beach”; it will understand the implicit context and find that specific photo from your family vacation to the Outer Banks in 2019 that you were thinking of. This isn’t an AI meant to replace human creativity with a generic facsimile; it’s an AI designed to be a true digital assistant, an extension of your own mind and memory. It’s private, it’s efficient, and it’s authentically yours.
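As a rough illustration of that vision, here is a hedged sketch of a small open model running entirely on your own machine via the Hugging Face transformers library. True personalization would come from fine-tuning (for example, with LoRA adapters) on your emails and notes; this sketch settles for the simpler stand-in of feeding a few of your own notes into the prompt. The model name and notes directory are placeholders, not recommendations.

```python
# Sketch of the "personal, on-device" idea: a small open model runs locally and
# is grounded in your own notes by including them in the prompt. Real
# personalization would come from fine-tuning on your data; this is a stand-in.
from pathlib import Path

from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model


def draft_reply(request: str, notes_dir: str = "~/notes") -> str:
    # Pull in a little personal context so the output sounds more like "you".
    notes = [p.read_text()[:1000] for p in sorted(Path(notes_dir).expanduser().glob("*.txt"))[:3]]
    prompt = (
        "You are my personal writing assistant. Match the tone of these notes:\n"
        + "\n---\n".join(notes)
        + f"\n\nTask: {request}\nDraft:"
    )
    out = generator(prompt, max_new_tokens=200, do_sample=True, return_full_text=False)
    return out[0]["generated_text"]
```

Nothing in that sketch leaves your machine, which is precisely the point.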

The current era of AI slop and overblown marketing is a feeding frenzy fueled by easy access to powerful but impersonal technology. The real revolution won’t be televised in a 30-second Instagram ad. It will arrive quietly, when our devices begin to understand us not as a generic user, but as an individual. That, to me, is the best A.I. worth waiting for.

Beyond AI: 2024’s Tech Breakthroughs That Didn’t Need Machine Learning

While Artificial Intelligence dominated headlines in 2024, the year saw a wealth of technological advancements that didn’t rely on machine learning and neural networks. These innovations spanned diverse fields, from medicine and energy to space exploration and consumer electronics. Here’s a look at some of the most noteworthy AI-free tech breakthroughs of 2024:

1. Brain-Computer Interfaces Take a Leap Forward:

Researchers at Stanford University and the BrainGate consortium achieved a remarkable feat in 2024, enabling a paralyzed patient to communicate at a record speed of 62 words per minute using a brain-computer interface (BCI). This breakthrough, utilizing implanted electrodes and sophisticated decoding software, offers renewed hope for individuals with severe speech impairments. While AI played a role in earlier BCI developments, this particular advancement focused on refining the signal processing and decoding techniques, demonstrating the power of bioengineering and neuroscience.

2. Quantum Computing Makes Strides:

Although still in its nascent stages, quantum computing witnessed significant progress in 2024. Companies like IBM and Google continued to push the boundaries of qubit technology, achieving greater stability and coherence. While practical applications remain on the horizon, these advancements lay the groundwork for future breakthroughs in fields like medicine, materials science, and cryptography. Notably, these advancements were primarily driven by progress in hardware and quantum algorithms, not by AI itself.  

3. High-Altitude Platform Stations (HAPS) Soar:

2024 saw increased interest in High-Altitude Platform Stations (HAPS), systems operating in the stratosphere to provide communication and observation capabilities. These platforms, which include balloons, airships, and fixed-wing aircraft, offer advantages over traditional terrestrial towers and satellites, particularly in remote areas. Advancements in solar power, battery technology, and lightweight materials have made HAPS a viable alternative for expanding connectivity and monitoring environmental changes.  

4. Elastocalorics: A Cool New Cooling Solution:

Elastocalorics, a cooling technology that utilizes the properties of shape-memory alloys, gained traction in 2024. These materials can absorb and release significant amounts of heat when deformed, offering a potentially more efficient and environmentally friendly alternative to traditional refrigeration. Researchers made progress in developing elastocaloric devices, paving the way for applications in air conditioning, electronics cooling, and even medical devices.  

5. Gene Editing with CRISPR Continues to Evolve:

CRISPR-Cas9 gene editing technology continued to advance in 2024, with researchers refining its accuracy and efficiency. While ethical concerns remain, CRISPR holds immense potential for treating genetic diseases and developing new disease-resistant crops. These advancements focused on improving the delivery and targeting of CRISPR systems, not on AI-driven applications.  

These are just a few examples of the many exciting technological advancements that emerged in 2024 independent of AI. While AI undoubtedly plays a transformative role in many fields, it’s important to recognize the continued progress driven by human ingenuity and scientific exploration across diverse disciplines.

Sources:

  • Brain-Computer Interface: Stanford University News Service, “Brain-to-text breakthrough: Paralyzed man sets record communication speed,” January 10, 2024.
  • Quantum Computing: IBM Research Blog, “IBM Unveils 1121-Qubit ‘Condor’ Processor, Pushing the Boundaries of Quantum Computing,” November 15, 2024.  
  • HAPS: World Economic Forum, “Top 10 Emerging Technologies of 2024,” June 26, 2024.  
  • Elastocalorics: ScienceDaily, “Elastocaloric Cooling: A Promising Alternative to Vapor Compression Refrigeration,” March 8, 2024.
  • CRISPR: Nature, “CRISPR technology: Applications and ethical considerations,” October 28, 2024.

Is It Worth It? AI’s Global Environmental Footprint: Energy, Water, and E-Waste

As of late, developments and advancements in AI seem to be coming at a feverish pace. From the major players like OpenAI, Google, Meta, and even Apple, down to the onslaught of tools from companies formed seemingly out of nowhere, there truly seems to be no end on the horizon for AI.

If you’re like me, you find it all fascinating and often downright cool. But you may also inevitably find yourself pondering the same question that runs through my head from time to time: “Is it worth it?”

As a person who has always been fascinated by technology, and who has built a career in it, I still hold on to the belief that technology should always help human life and not harm or replace it. So, in trying to keep up with all things AI, I have written about and spoken to professionals about many of the potential human-helping good things that AI can do. However, in light of all that goes into these tools, I can’t help but continue to ask the question: “Is it worth it?” Not that I’m jumping on the “gloom and doom” bandwagon, but we simply can’t ignore the major negative side effects of AI technology and its development.

Generative AI models, such as ChatGPT, Bing Copilot, and other tools powered by OpenAI, require vast amounts of energy for training and operation, raising concerns about their environmental impact.

The training process alone for a large language model like GPT-3 can consume up to 10 gigawatt-hours (GWh) of electricity, roughly equivalent to the annual electricity consumption of over 1,000 U.S. households. This energy consumption translates to a substantial carbon footprint, estimated to be between 55 and 284 tons of CO2 for training GPT-3 alone, depending on the electricity source.

Running these models also demands significant energy, albeit less than training. A single ChatGPT query can consume 15 times more energy than a Google search query. As AI, particularly generative AI, becomes more integrated into various sectors, the demand for more data centers to handle the processing power will increase. Data centers already account for about 1–1.5% of global electricity consumption and 0.3% of global CO2 emissions. This demand is projected to escalate, potentially leading to global AI electricity consumption comparable to the annual electricity consumption of Argentina and Sweden by 2027. Additionally, water consumption for cooling these data centers is another environmental concern, with estimates indicating global AI demand could be responsible for withdrawing 4.2–6.6 billion cubic meters of water annually by 2027.
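For a quick sanity check on those figures, here is a back-of-envelope calculation. The per-household figure (roughly 10,500 kWh per year for an average U.S. home) is my own assumption; the other numbers come from the estimates above.

```python
# Back-of-envelope check of the figures cited above.
TRAINING_ENERGY_KWH = 10e6          # ~10 GWh cited for training a GPT-3-class model
KWH_PER_US_HOUSEHOLD_YEAR = 10_500  # assumed average annual US household usage

households = TRAINING_ENERGY_KWH / KWH_PER_US_HOUSEHOLD_YEAR
print(f"Training energy ≈ {households:,.0f} households' annual electricity use")
# ≈ 950, i.e. on the order of the 1,000 households cited above

# The projected 2027 water withdrawal range, converted from cubic meters to liters.
low_m3, high_m3 = 4.2e9, 6.6e9
print(f"Projected AI water withdrawal ≈ {low_m3 * 1000:.2e} to {high_m3 * 1000:.2e} liters per year")
```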

The ICT sector, encompassing AI infrastructure, contributes to about 2% of global CO2 emissions. This contribution is expected to rise proportionally with the increasing use and development of generative AI models. While the financial aspects of these operations are substantial, with estimated daily operating costs for ChatGPT reaching $700,000, the environmental costs, particularly in terms of energy consumption and carbon footprint, are significant and warrant attention.

Electronic waste (e-waste) from AI technology includes harmful chemicals like mercury, lead, and cadmium.

These chemicals can leach into the soil and water, posing risks to human health and the ecosystem. The World Economic Forum (WEF) predicts that e-waste will exceed 120 million metric tonnes by 2050. Managing e-waste responsibly and recycling it is crucial to prevent environmental damage and limit the release of toxic substances. Stricter regulations and ethical disposal methods are necessary to handle and recycle e-waste associated with AI safely.

Global Impact of AI Training on Water Resources

Research indicates that global AI demand could lead to the withdrawal of 4.2–6.6 billion cubic meters of water by 2027, more than half of the United Kingdom’s total annual water withdrawal. The issue of AI’s impact on water consumption, alongside other potential environmental effects, is often overlooked, and the lack of data shared by developers only compounds the problem.

Water Consumption of ChatGPT

One report states that ChatGPT-3 consumes approximately 800,000 liters of water per hour. This amount of water is enough to fulfill the daily water needs of 40,000 people.

Factors Contributing to Water Consumption in Data Centers

Data centers, which house the servers and equipment for storing and processing data, require significant amounts of water for cooling and electricity generation. The increasing demand for AI services leads to a higher demand for data centers. Data centers account for about 1 – 1.5% of global electricity consumption and 0.3% of global CO2 emissions.

Reducing the Environmental Impact of AI

Several strategies can be implemented to reduce the energy consumption and environmental impact of AI systems like ChatGPT.

  • Enhancing the efficiency and design of hardware and software used for running AI models. One example is using liquid immersion cooling as opposed to air cooling, which can lower heat and minimize carbon emissions and water usage in data centers.
  • Powering data centers with renewable energy sources like wind, solar, and hydro power. Some countries, such as Norway and Iceland, have low-cost, green electricity due to abundant natural resources. Taking advantage of this, numerous large organizations have established data centers in these countries to benefit from low-carbon energy.
  • Limiting the use of AI models to essential and meaningful applications, avoiding their use for trivial or harmful purposes.
  • Increasing transparency and disclosing water efficiency data and comparisons of different energy inputs related to AI processes.

Need for Transparency and Accountability

There is a call for increased transparency regarding operational and developmental emissions resulting from AI processes. This includes disclosing water efficiency data and making comparisons between different energy inputs. Open data is essential to compare and assess the true environmental impacts of various language models, as well as the differences between them. For instance, a coal-powered data center is likely to be less energy-efficient than one powered by solar or wind energy. However, this assessment requires access to open data. A comprehensive evaluation should consider economic, social, and environmental factors. Community engagement, local knowledge, and individual understanding can be influential in persuading developers to share this data. Increased awareness of potential environmental impacts, along with the more widely discussed ethical concerns surrounding AI, could strengthen individual calls for a more accountable and responsible AI industry.

All said and done, and in consideration of the information that is available, it is difficult at best to answer the question as to whether or not AI will ever be “worth it”. In my opinion, we need more tangible, positive outcomes in order to truly have an answer. However, at this point in history, I’m finding it harder and harder to believe that there will ever be a viable payout to justify the amount of resources that are going into the development stages alone. I’m not calling for an overall halt to all AI development and usage (we’re far beyond that being a sensible answer), but I do believe we as a collective whole should agree to more efficient, less wasteful paths forward.

AI’s Deep Impact: Reshaping Minds, Relationships, and Well-being in the Digital Age

Artificial intelligence (AI) is no longer a futuristic concept; it’s woven into our daily lives, influencing how we think, connect, and feel. From virtual assistants that streamline tasks to AI-driven therapy platforms, this technology is reshaping the human experience. But what does this mean for our mental health, relationships, and overall well-being?

Cognitive Shifts and Behavioral Adaptations

Research in cognitive science suggests that AI is subtly altering our cognitive processes. The constant stream of information from AI-powered devices can lead to shorter attention spans and a reduced ability to focus deeply on tasks. Moreover, our reliance on AI for decision-making may diminish our critical thinking skills, as we become accustomed to algorithmic guidance.

However, AI also offers cognitive enhancements. Tools like language learning apps and brain training games can improve memory and cognitive flexibility. AI-powered personalization algorithms can also curate information and experiences tailored to our individual preferences, potentially boosting engagement and learning.

Mental Health: Challenges and Opportunities

The rise of AI has sparked both concerns and optimism regarding mental health. On one hand, studies show a correlation between excessive social media use (often driven by AI algorithms) and increased rates of anxiety and depression. Additionally, the fear of job displacement due to automation can contribute to stress and feelings of insecurity.

Conversely, AI is revolutionizing mental health care. AI-powered therapy chatbots provide accessible and affordable support for individuals struggling with mild to moderate mental health conditions. Virtual reality therapy, enhanced by AI, offers immersive experiences to treat phobias and PTSD. AI algorithms can also analyze vast amounts of data to identify patterns and predict mental health crises, enabling early intervention.

Transforming Interpersonal Relationships

AI is changing the way we connect with others. Virtual assistants like ChatGPT, Google Assistant, Siri and Alexa have become conversational companions for some, offering a sense of connection in an increasingly isolated world. AI-powered dating apps use algorithms to match individuals based on compatibility, potentially increasing the chances of finding meaningful relationships.

Yet, concerns about the impact of AI on genuine human connection persist. Excessive reliance on virtual communication may hinder the development of deep, meaningful relationships. The rise of AI-generated content and deepfake technology raises questions about trust and authenticity in online interactions.

Real-World Stories: AI’s Impact on Individuals

  • Case Study 1: Sarah, a marketing professional, found her productivity and creativity soared when she integrated AI tools into her workflow. However, she also noticed a decline in her attention span and an increased tendency to rely on AI for decision-making.
  • Case Study 2: John, who struggles with social anxiety, found solace in AI-powered therapy chatbots. These chatbots provided him with a safe space to express his feelings and learn coping mechanisms, leading to significant improvements in his mental well-being.

Conclusion: Navigating the AI-Infused Future

The integration of AI into our lives is a double-edged sword. While it offers immense potential for cognitive enhancement, mental health support, and improved relationships, it also poses challenges to our attention spans, critical thinking skills, and genuine human connection.

As we navigate this AI-infused future, it’s crucial to strike a balance. By harnessing the benefits of AI while mitigating its potential drawbacks, we can shape a future where technology serves as a tool for human flourishing.

Sources:

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140-154.

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.

Shute, V. J., & Ventura, M. (2013). Stealth Assessment in Digital Games. MIT Press.

Liu, J., Dolan, P., & Pedersen, E. R. (2010). Personalized news recommendation based on click behavior. In Proceedings of the 15th International Conference on Intelligent User Interfaces (pp. 31-40).

Primack, B. A., Shensa, A., Sidani, J. E., Colditz, J. B., & Miller, E. (2017). Social media use and perceived social isolation among young adults in the US. American Journal of Preventive Medicine, 53(1), 1-8.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.

Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4(2), e19.

Freeman, D., Reeve, S., Robinson, A., Ehlers, A., Clark, D., Spanlang, B., & Slater, M. (2017). Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychological Medicine, 47(14), 2393-2400.

Torous, J., Kiang, M. V., Lorme, J., & Onnela, J. P. (2016). New tools for new research in psychiatry: A scalable and customizable platform to empower data driven smartphone research. JMIR Mental Health, 3(2), e16.

Carolan, M. (2019). The impact of artificial intelligence on human relationships. AI & Society, 34(4), 853-866.

Finkel, E. J., Eastwick, P. W., Karney, B. R., Reis, H. T., & Sprecher, S. (2012). Online dating: A critical analysis from the perspective of psychological science. Psychological Science in the Public Interest, 13(1), 3-66.

Turkle, S. (2015). Reclaiming conversation: The power of talk in a digital age. Penguin.

Floridi, L. (2019). Artificial Intelligence, Deepfakes and a Future of Ectypes. Philosophy & Technology, 32(4), 633-641.

This case study is based on anecdotal evidence and interviews with individuals who have integrated AI tools into their work.

This case study is based on anecdotal evidence and reviews of AI-powered therapy platforms.

Debunking the Fear of Sentient AI: Separating Fact from Fiction

There continues to be a lot of buzz about artificial intelligence (AI) taking over the world, with it running amok and “getting rid of humans” if it becomes sentient. But hold on a sec, before you start stockpiling canned goods, let’s break down why this fear of super-sentient AI might be a bit overblown.

First things first, let’s clear the air on some key terms. You might hear words like “sentience,” “sapience,” and “consciousness” thrown around when discussing AI. Here’s a quick cheat sheet:

  • Sentience: Imagine feeling the warmth of the sun on your skin or the taste of your favorite pizza. That’s sentience – the ability to experience feelings and sensations.
  • Sapience: This is like “wisdom on steroids.” It’s about understanding the bigger picture, having deep self-awareness, and maybe even feeling compassion for others. Think Yoda meets Einstein.
  • Consciousness: This is simply being aware of yourself and the world around you. It’s the foundation for everything else.

Now, here’s the thing: AI is incredibly good at specific tasks, like playing chess or recommending movies. But achieving true sentience, sapience, or even full-blown consciousness? That’s a whole different ball game.

Think of your toaster. It can make perfect toast every time, but it has no idea what “toast” even is. It’s following a set of instructions, not pondering the meaning of breakfast. That’s where current AI stands.

Here’s why fearing sentient AI might be a bit like worrying your Roomba will start plotting world domination:

  • No Feelings, No Problem: AI doesn’t have emotions. It can’t feel happy, sad, or angry, let alone want to overthrow humanity.
  • Limited Scope: AI can be amazing at specific tasks, but it struggles with anything outside its programming. It’s like a super-powered calculator; great at calculations, terrible at philosophy.
  • We’re in Control (For Now): We’re the ones building and programming AI. We can set safeguards and limitations to ensure it stays on the right track.

Now, this isn’t to say AI development shouldn’t be approached with caution. We definitely need to be responsible and think about the potential risks. But instead of fearing killer AI bent on human eradication, let’s focus on using AI for good – to solve problems, improve lives, and maybe even make the perfect cup of coffee (sentience not required).

Here’s a look at how AI is already being used for good in the real world:

  • Fighting Climate Change: AI is being used to analyze vast amounts of data to predict weather patterns, optimize energy use in buildings, and even develop new sustainable materials.
  • Revolutionizing Healthcare: AI-powered tools are assisting doctors in diagnosing diseases earlier and more accurately. They’re also being used to develop personalized treatment plans and even helping with drug discovery.
  • Saving Lives in Emergencies: AI is being used to analyze traffic patterns and predict accidents, allowing emergency services to respond faster. It’s also being used to develop search-and-rescue robots that can navigate dangerous terrains.

Of course, with any powerful technology, there are ethical considerations. Bias in the data used to train AI systems can lead to unfair outcomes. For example, an AI system used in loan applications might inadvertently discriminate against certain demographics. It’s important to ensure fairness and transparency in AI development.
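As a toy illustration of what checking for that kind of disparity can look like, here is a short sketch that compares approval rates across two demographic groups in made-up loan decisions. Real fairness auditing involves far more than this single metric, and all of the data here is hypothetical.

```python
# A minimal illustration of the disparity check described above: compare
# approval rates across demographic groups in (entirely made-up) loan decisions.
from collections import defaultdict

decisions = [  # (group, approved) -- toy, hypothetical data
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}
print("demographic parity gap:", max(rates.values()) - min(rates.values()))
```

A gap that large would be a signal to dig into the training data and decision process, not proof of intent, but it shows why transparency matters.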

So, the next time you hear about AI taking over the world, take a deep breath and remember – the real danger isn’t robots with feelings, but failing to use this powerful technology for the betterment of humanity.

Parenting Before Banning

On the evening of November 8th, I watched the bulk of the Republican debate with my daughter, and she continued to question the attacks against TikTok. She specifically kept asking why they were repeatedly bringing up kids. I explained to her that it’s a go-to tactic of politicians. It also brought to mind a few things –

1. Whenever I give talks to parents and others about kids and tech, especially when it comes to cell phones, I always ask 2 simple questions – Who bought it? WHO pays the monthly bill? As such, when we as PARENTS decided to buy each of our children cell phones, they were informed that these phones are OURS and that we are letting them use them. When they are old enough to afford their own phones and cell phone bills, they are more than welcome to do so.

2. When we as PARENTS allow them to have social media accounts (our oldest son has never really been interested in it, by the way) we are to be given the login information. We don’t stalk them. Don’t really interact with them on social media. However, we do random check-ins.

Long story short, instead of chatter about banning TikTok or any social media platform “to save our children”, why not FIRST be parents? You can set these and other simple ground rules from the beginning. Don’t just hand your children powerful devices without rules and guidance. Would you hand your children keys to your car without first teaching them how to drive – specifically warning them about the dangers, laws, and regulations? So why simply hand them a powerful computer system and let them “figure it out” on their own?

We can sit back and blame and demonize social media, and everything else online. But we must first ask ourselves, who has given them the tools to get there?

Political bans on things can’t replace parenting.

June Is Black Music Month

Black Music Month is celebrated in June. In 1979, President Jimmy Carter declared June as Black Music Month to recognize the influence of Black music on the United States and the world. In 2009, President Barack Obama renamed the month African-American Music Appreciation Month.


Black music started as a combination of blues, jazz, boogie-woogie, and gospel. It appealed to young audiences across racial divides. In the 1960s, Black musicians developed funk and were leading figures in the fusion genre. In the 1970s and 1980s, Black artists developed hip-hop and house music.

A Sensible Approach To AI

It’s obvious that AI is here to stay and, unfortunately, so is a lot of the confusing (often sensationalized) rhetoric that is permeating most media. As such, I think it’s important that we continue to look at what AI actually is in its current state and going forward, as opposed to what some say it might become. Instead of the constant drumbeat of doom – most of which is based on science fiction – I find it far more sensible to keep educating ourselves as best we can about this continually developing tool.

What is AI?

Artificial intelligence (AI) is the ability of a computer or machine to mimic intelligent human behavior. AI research has been highly successful in developing effective techniques for solving a wide range of problems, from game playing to medical diagnosis.

There are many different approaches to AI, but they all share a common goal: to create machines that can think and act like humans. Some of the most common approaches to AI include:

  • Machine learning: Machine learning is a field of AI that focuses on developing algorithms that can learn from data without being explicitly programmed. Machine learning algorithms are used in a wide variety of applications, including spam filtering, image recognition, and natural language processing (see the sketch after this list).
  • Deep learning: Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Artificial neural networks are inspired by the human brain, and they have been shown to be very effective at learning complex tasks. Deep learning is used in a wide variety of applications, including image recognition, natural language processing, and speech recognition.
  • Expert systems: Expert systems are computer programs that are designed to mimic the reasoning of human experts. Expert systems are used in a wide variety of applications, including medical diagnosis, financial planning, and manufacturing.
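Here is the minimal sketch promised above: the machine-learning approach applied to spam filtering, where the model learns from labeled examples instead of hand-written rules. It assumes scikit-learn and uses a tiny made-up dataset, so it only illustrates the shape of the workflow.

```python
# A minimal machine-learning example: learn a spam filter from labeled text
# instead of writing rules by hand. Toy data; assumes scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "claim your free reward",
         "meeting moved to 3pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["free prize waiting"]))       # likely ['spam']
print(model.predict(["see you at the meeting"]))   # likely ['ham']
```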

AI is a rapidly growing field, and it is having a major impact on many different industries. AI is already being used to improve the efficiency of businesses, to develop new products and services, and to provide better healthcare. As AI continues to develop, it is likely to have an even greater impact on our lives.

Helping AI Speak

Large language models (LLMs) are a type of artificial intelligence (AI) that are trained on massive datasets of text and code. This allows them to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. LLMs are still under development, but they have the potential to be incredibly helpful in a wide range of applications.

LLMs are a powerful tool for building AI applications. They can be used to improve the performance of AI systems in a variety of ways, including:

  • Generating text: LLMs can be used to generate text that is more natural and coherent than text that is generated by traditional AI systems. This can be helpful for applications such as chatbots, machine translation, and text generation (a brief example follows this list).
  • Translating languages: LLMs can be used to translate languages with a high degree of accuracy. This can be helpful for applications such as machine translation, cross-lingual search, and international communication.
  • Writing different kinds of creative content: LLMs can be used to write different kinds of creative content, such as poems, code, scripts, musical pieces, email, letters, etc. This can be helpful for applications such as content generation, creative writing, and entertainment.
  • Answering your questions in an informative way: LLMs can be used to answer your questions in an informative way, even if they are open ended, challenging, or strange. This can be helpful for applications such as question answering, research, and education.
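For a brief, concrete example of the first two items, here is a sketch that runs small open models locally through the Hugging Face transformers library for text generation and English-to-French translation. The model names are common small examples rather than endorsements, and output quality from models this small will be modest.

```python
# Two of the LLM uses described above: text generation and translation.
# Assumes the Hugging Face transformers library; models download on first run.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")
translate = pipeline("translation_en_to_fr", model="t5-small")

print(generate("Large language models are", max_new_tokens=30)[0]["generated_text"])
print(translate("Large language models are useful tools.")[0]["translation_text"])
```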

As LLMs continue to improve, they are likely to become even more powerful and versatile. They have the potential to revolutionize the way we interact with computers, and they could have a major impact on a wide range of industries.

Here are a few ways that LLMs can be helpful:

  • They can be used to generate text: LLMs can be used to generate text in a variety of styles, including news articles, blog posts, creative writing, and even code. This can be helpful for a variety of tasks, such as creating marketing materials, writing reports, or generating new ideas.
  • They can be used to translate languages: LLMs can be used to translate languages with a high degree of accuracy. This can be helpful for businesses that need to communicate with customers or partners in other countries, or for people who are learning a new language.
  • They can be used to write different kinds of creative content: LLMs can be used to write poems, code, scripts, musical pieces, email, letters, etc. This can be helpful for people who want to create creative content but don’t have the time or skills to do it themselves.
  • They can be used to answer your questions in an informative way: LLMs can be used to answer your questions in a comprehensive and informative way, even if they are open ended, challenging, or strange. This can be helpful for students, researchers, and anyone else who needs to find information quickly and easily.



Here are some additional benefits of using LLMs:

  • They can help you save time: LLMs can automate tasks that would otherwise take a lot of time, such as writing reports or translating documents. This can free up your time so you can focus on other things.
  • They can help you improve your productivity: LLMs can help you generate more creative and informative content, which can lead to improved results in your work.
  • They can help you learn new things: LLMs can be used to access and learn from a vast amount of information. This can help you stay up-to-date on current events, learn new skills, and expand your knowledge.

Overall, LLMs are a powerful tool that can be used to improve your productivity, creativity, and knowledge. As they continue to improve, they are likely to become even more useful in the years to come.

What’s Everyone Afraid Of?

There are a number of current fears about AI, including:

  • Job loss: AI is becoming increasingly capable of automating tasks that are currently performed by humans. This could lead to widespread job losses, particularly in low-skilled and repetitive jobs.
  • Bias: AI systems are trained on data that is created by humans. This means that they can reflect the biases that exist in society. For example, an AI system that is trained on a dataset of resumes that are mostly from men may be more likely to recommend men for jobs.
  • Misuse: AI systems can be used for malicious purposes, such as creating deepfakes or developing autonomous weapons.
  • Loss of control: As AI systems become more powerful, it may become more difficult for humans to control them. This could lead to a situation where AI systems make decisions that have negative consequences for humans.

It is important to note that these are just some of the current fears about AI. It is also important to remember that AI is a tool, and like any tool, it can be used for good or for evil. It is up to us to ensure that AI is used for the benefit of humanity, and not for its destruction.

Whether or not we should fear AI is a complex question with no easy answer. There are certainly some potential risks associated with AI, such as the possibility of job loss, bias, and misuse. However, there are also many potential benefits of AI, such as improved efficiency, productivity, and creativity.

Ultimately, the question of whether or not we should fear AI is a matter of perspective. If we focus on the potential risks, then it is easy to see why some people might be afraid of AI. However, if we focus on the potential benefits, then it is easy to see why others are optimistic about AI’s future.

Ethical AI


Here are some things we can do to ensure that AI is used for good:

  • Educate ourselves about AI: The more we know about AI, the better equipped we will be to make informed decisions about its use.
  • Promote responsible AI development: We need to encourage developers to create AI systems that are safe, ethical, and beneficial to society.
  • Support policies that protect people from the negative impacts of AI: We need to ensure that AI is used in a way that does not harm people, their jobs, or their privacy.

By taking these steps, we can help to ensure that AI is used for the benefit of humanity, and not for its destruction.

So what do you think about the current state of AI, and its future?

My Time With The Pixel Watch

When it comes to new devices, the norm for many years now has been for Content Creators (whether well-known or up-and-coming) to rush to get reviews out as soon as possible. Nearly everyone in the content creation space desperately craves the oh-so-coveted Review Units that device makers release into the wild in the hopes of not only gaining some real-world usage data, but also drumming up some buzz about their devices. Content Creators not only have the opportunity to score some free new tech, but they also get a chance to claim the “hashtag FIRST crown,” which helps develop their reputation for always being in touch with what’s new and hot.

I’m not necessarily criticizing any of this. In fact, I’ve always been a lover of beta testing software and gadgets – mostly because of my love of tinkering in general. And let’s face it, who doesn’t like free stuff? Also, having Content Creators get reviews out early enough can help consumers decide on purchases – especially if the creators have developed trustworthy track records.

But there are times when I wish that some reviews could be held off for a bit longer. I sometimes feel that certain device reviews would be better served by a few months of usage, as opposed to several weeks, considering how much today’s devices change thanks to software updates and security patches, as well as accessories that often only become available months after a device’s release. Often when a device maker holds a launch event, many of the features and/or accessories are tagged as “coming soon”. Thus many of the “cool” and “oooh-ahh” things that we see onstage won’t be available for months to come.

Aside from those reasons, I feel that a few weeks of living with a device simply might not be enough time to truly gauge its value. For example, some devices can be quick and snappy early on, but as more time goes by that may no longer be the case. If Content Creators talk about how fast and responsive a device is early on, there may be no indication of device sluggishness a couple of months later. Thankfully there are many Content Creators and media outlets who do provide follow-up reviews later down the line, but oftentimes many consumers only pay attention to coverage from when a device is new. Finally, I’m willing to concede that much of this is based on my personal preferences, so a lot of what I’m saying here may seem to be much ado about nothing to many of you reading. However, I just wanted to voice my opinion about the device review cycle, as well as offer some context as to why I waited from the date I received my Pixel Watch (October 17th, 2022) until now (I’m writing this on February 11th, 2023) to give my impressions of it.

The Band Had To Go

Speaking of content norms, it’s typical for reviews like this to start with tech specs. Rather than list them all individually here, I’ll instead point you to Google’s Specs Page. There you’ll find everything you need to know, from connectivity and glass type to battery size.

But from the day the Pixel Watch was officially announced and revealed, I knew my biggest issue would have nothing to do with the tech specs. Instead, it would be with the included watch band. Specifically, the material it’s made from. This would not be the first time I’ve had contact with this type of band (made from fluoroelastomer), so even though I knew I wanted the watch, I also knew one of the first things I would have to do is replace the band as soon as possible. Simply put, I can’t wear these types of bands for prolonged periods, as they cause skin irritation and sometimes discoloration, especially during summer months. Since the Pixel Watch was launched in October, I had plenty of time to grab a replacement band. But with that I ran into a slight hiccup.

The good news is that Google also announced alternative watch bands available in the Google Store. The bad news, in my opinion, is that they are all a bit pricey. So I did what many would do – I turned to Amazon. Going in, I knew specifically what I was looking for. I prefer more traditional metal watch bands, and found this $17 band from E ECSEM. This band is easily adjustable and can be attached and detached quickly thanks to the Pixel Watch attachment design. You can view my full Amazon review here. Also, at the time of this writing, the price has been reduced to $11.18! And yes – I’m still wearing this band daily.

What’s Not To Like

No beating around the bush here – simply put, the battery. Back when the first Apple Watch became widespread, I chuckled at the iOS faithful who complained about the battery life. After all, what good is a watch that requires you to charge it every night, and sometimes in the middle of the day? (Bear in mind, Apple claimed the watch could achieve 18 hours). As newer models were released, I still heard the same grumblings. In fact, it made me wonder if the whole “smart watch” craze was truly worth it. Sure – there are lots of cool features, sleep tracking, health tracking, etc. But was it all truly worth it if you have to charge them every night?

I began to wonder if there would ever be a worthy offering in the world of Android. Samsung began boasting 40–50 hours of battery life with the Galaxy Watch line, which is pretty impressive indeed. But I was already loyal to Google’s Pixel line (not just the phones, but I also owned the Pixel C as well as a Pixelbook), so I was hopeful a watch was soon on the way. As it turns out, I had to wait a few more years for that to become a reality, but in the meantime I wanted to try a smartwatch just to get a feel for things.

I had read about the Amazfit Bip and gave it a shot. It was an interesting watch that sported a 14–16 day battery life. It didn’t have any Wear OS functionality – it was merely a Bluetooth watch that could be used with either Android or iOS. It included its own proprietary app for sleep tracking, step tracking, and even mobile payments. And above all else, it cost me a mere $35 at the time.

But I then discovered the Amazfit GTR-2, pictured here to the right of the Pixel Watch. As you can see, it has a large, beautiful display, integrates with Alexa, supports Bluetooth calling, offers blood oxygen and heart rate tracking, and so on. But most importantly, it also features the impressive 14–16 day battery life that I can only WISH the Pixel Watch could achieve.

It is, in my opinion, one of the best smartwatch values for Android users. In fact, be sure to check out Amazfit’s entire line because I’m quite sure you’ll find something to fit your style and needs.

That being said, I was a bit disappointed when the Pixel Watch specs began rolling out, and it became known that it would have a 24-hour battery life. Yes, I still went ahead with it. I, like some reviewers I’ve read, have learned to schedule charging times during less impactful times of the day, which is typically first thing in the morning as I’m getting ready for work. The good news is that it does truly get a full 24 hours, so once I have it fully charged by 7:45AM I don’t have to worry about it again until 7:45AM the next morning. And it charges pretty quickly – in about an hour and a half it can go from about 26% to 100% easily.

Aside from the battery life, I was also a bit confused by Google somewhat ignoring Google Fit in favor of pushing Fitbit Premium. Look, I get it – Google bought Fitbit, so of course they’re going to market it as best they can. But I’ve been using Google Fit since its inception, and I personally have no reason to want to switch. Also, I like the Google Fit interface because it’s clean and simple. Thankfully there’s Health Connect, so I can have Fitbit running and collecting the data I want but view it all in Google Fit.

The Final Verdict

Overall, I truly love the Pixel Watch. I love the style, the customization options, and all of the general Googliness! I think there’s room for some improvements that can hopefully be made with software updates (case in point, Bedtime Mode should enable automatically like it does on Pixel phones). I think it’s a great value at its price point, and other than the battery life, I think Google did a great job with this latest addition to the Pixel family.