Recent health reports highlight the emerging use of AI and its potential effects on wellbeing. As research evolves to help us better understand that impact, it is important to recognize that AI is not all “good” or “bad.” Its applications are complex, and there are important considerations to take into account when making informed decisions about our use of AI or AI-assisted technology.

One particular area of adoption concerns the impact that chatbots may have on our community and our mental health. A 2024 survey found that about 25% of young adults had used AI chatbots for health advice or information. Additionally, a 2025 survey reported that while 33% of teens used AI “companions” for social interaction and relationships, 12% of those teens used them for emotional or mental health support. However, research has also found that greater AI use is associated with higher levels of depressive symptoms compared to non-use, which suggests that further exploration of potential risks and benefits is needed.

While people may use AI tools for a variety of helpful reasons, we encourage students to be mindful during their interactions with chatbots, companions, apps, or other AI functions, especially when speaking about mental or emotional concerns. 

The American Psychological Association (APA) offers several recommendations for interacting with AI for mental health, and we have chosen to share some of those with you here. 

  • Do not rely on GenAI chatbots or wellness apps to deliver psychotherapy or psychological treatment

A recent Stanford University study highlights that AI tools may provide misinformation about mental health concerns, potentially increasing stigma and engaging with users in ways that do not align with best practices in the field. Chatbots are often designed to be deferential to their users, whereas therapy and therapeutic interventions typically challenge clients in important ways about biases, assumptions, or unhelpful thought patterns. If you are seeking mental health support, please remember that the Counseling Center offers a variety of free and confidential therapeutic services.

  • Prevent unhealthy relationships and dependencies between users and GenAI chatbots and apps

A recent APA health advisory highlights how simulated empathy, persuasive intent, and blurred lines between human and artificial interaction can make it difficult to maintain healthy boundaries in simulated human relationships. Additionally, dependency on these tools can interfere with the development of healthy real-world relationships. We recognize how important connection is, especially during your time at Lafayette. If you find yourself struggling to find community, the Office of Student Involvement has many different resources and opportunities to help you create meaningful relationships. The Counseling Center also offers a selection of groups and workshops focused on building connection, as well as anonymous, clinically moderated, online peer-to-peer support via Togetherall.

  • Prioritize privacy and protect user data

AI tools often collect large amounts of sensitive data about their users, and it may be unclear how detailed profiles about you and your usage are stored or managed. There is concern that this data may be used to influence people’s behavior, and the more personal information you share, the more vulnerable you may be to manipulation. By being mindful of what information you share with AI and the ways in which that data may be used, you can protect your personal autonomy (including your cognitive liberty) and reduce the likelihood of manipulation.

Our goal in sharing this information is to empower you to make informed choices about engaging safely with AI while maintaining your overall health and wellbeing. If you have any concerns regarding your use of AI or AI-assisted technology, please don’t hesitate to reach out to the Counseling Center for support.