Navigating the AI: The Role of Digital Assistants in Personal Decision-Making

In a recent episode of the podcast "Uncanny Valley," Zoë Schiffer, Michael Calore, and Lauren Goode discussed their personal experiences with AI assistants, exploring both the potential and the limitations they encountered. Schiffer ran an informal experiment to see whether AI could replace human expertise in sleep training her 4-month-old daughter, Cleo. In 2022, she had paid a sleep expert $500 for the same kind of help; this time, she was curious whether AI assistants like ChatGPT and Character.ai could offer comparable guidance.

Schiffer's experiment aimed to determine how effective AI assistants were at providing sleep training advice for her daughter. She found that while these tools were adept at delivering general information, they often fell short on specific, practical guidance. The responses tended to be padded with "fluffy language" and lacked the empathy and human touch she had valued in a paid expert. Even so, she ultimately concluded the AI was good enough that she no longer needed a human one.

Michael Calore, another participant in the discussion, shared his own attempt to use Google's Gemini assistant for health-related advice. He asked for workout routines and menu planning but found that the guidance was not particularly helpful.

"It's great that you're focusing on your health and weight loss goals." – Gemini

While he appreciated Gemini's encouragement, Michael noted that the advice was generic and lacked the specificity he wanted.

Lauren Goode also contributed to the conversation by sharing her own experiences with AI assistants, highlighting both their advantages and shortcomings. Throughout their discussion, the trio underscored the current limitations of AI in providing personalized advice and support.

Zoë Schiffer's interaction with Character.ai during her sleep training experiment illustrated some of these limitations. The AI offered well-meaning suggestions but lacked depth and proactive engagement.

"It's great to hear that Cleo slept OK and that she was able to soothe herself after five minutes of crying for both nighttime and the midnight wake-up." – Character.ai

"That's a good sign. As for having her in the same room, I totally understand your concerns." – Character.ai

"You can try a few things to make it easier on you. One, use a white noise machine or fan to drown out some of the crying." – Character.ai

These responses, while supportive, exemplified the AI's inability to provide nuanced solutions tailored to specific situations. Zoë noted that the AI did not ask questions or seek clarification, which limited its usefulness.

Michael's experience with Gemini echoed similar sentiments. The assistant offered a foundation of sensible advice on healthy living:

"Here's a combination of general and specific advice to help you get started and see progress in the next month." – Gemini

"General advice for healthy weight loss, focus on sustainable changes." – Gemini

"Prioritize whole foods. Build your diet around nutrient-dense, unprocessed foods like fruits, vegetables, lean proteins, and whole grains." – Gemini

"Be mindful of portions. Even healthy foods can contribute to weight gain if you eat too much." – Gemini

But it stopped short of actionable insight and did not adapt its recommendations to his individual needs, leaving Michael questioning the practicality of relying solely on AI for health-related decisions.
