<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Current Research | DAMIEN LEKKAS</title>
    <link>http://damienlekkas.com/project/</link>
      <atom:link href="http://damienlekkas.com/project/index.xml" rel="self" type="application/rss+xml" />
    <description>Current Research</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Sun, 28 Jul 2024 00:00:00 +0000</lastBuildDate>
    <image>
      <url>http://damienlekkas.com/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
      <title>Current Research</title>
      <link>http://damienlekkas.com/project/</link>
    </image>
    
    <item>
      <title>Acute suicidal ideation in context: Highlighting sentiment-based markers through the diary entries of a clinically depressed sample</title>
      <link>http://damienlekkas.com/project/acute-si-nlp/</link>
      <pubDate>Sun, 28 Jul 2024 00:00:00 +0000</pubDate>
      <guid>http://damienlekkas.com/project/acute-si-nlp/</guid>
      <description>&lt;p&gt;Given the demonstrated utility of smartphone-based ecological momentary assessment (EMA) and natural language processing (NLP) applications in mental health and suicide research, as well as the need for more temporally sensitive models of acute suicidal ideation (SI), the current work applied a sentiment analysis approach to explore how the language of diary entries is tied to acute changes in self-reported SI severity.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>ChatGPT as Therapy: A statistical and network-based thematic profiling of shared experiences, attitudes, and beliefs on Reddit</title>
      <link>http://damienlekkas.com/project/reddit-chatgpt/</link>
      <pubDate>Sun, 28 Jul 2024 00:00:00 +0000</pubDate>
      <guid>http://damienlekkas.com/project/reddit-chatgpt/</guid>
      <description>&lt;p&gt;Large language models (LLMs), including ChatGPT, have grown rapidly in capability and adoption over the past few years. For example, LLMs have frequently been endorsed as a suitable alternative or adjunct to traditional therapy for persons who experience mental health problems and are seeking help. Given the existing barriers to receiving adequate mental health services, LLMs provide a unique tool that may be used as a precursor, adjunct, or alternative to traditional therapy. Moreover, prior work has indicated that LLMs may be capable of generating therapeutic, empathetic, human-like responses to users. Although LLMs hold potential benefits for persons seeking help with their mental health, relatively little is known about the views and attitudes toward LLMs among persons who have endorsed using them for their mental health. Thus, the purpose of the current study was to provide a deeper investigation into the positive and negative views endorsed by persons who reported using LLMs, including ChatGPT, for their mental health.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Automatic descriptive classification of self-reported traumatic experiences: A primer and example in applying a pretrained large language model to study psychological phenomenology</title>
      <link>http://damienlekkas.com/project/trauma-nlp/</link>
      <pubDate>Thu, 27 Jul 2023 00:00:00 +0000</pubDate>
      <guid>http://damienlekkas.com/project/trauma-nlp/</guid>
      <description>&lt;p&gt;A large cohort of &lt;em&gt;N&lt;/em&gt;=1,473 individuals, recruited through Google Ads for pre-screening as part of a &lt;a href=&#34;https://trackingdepression.com/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;larger study&lt;/a&gt; on major depressive disorder (MDD), is being used. Written responses to an open-ended prompt about the worst event each participant experienced are descriptively classified via a novel 31-item inventory that allows for structured qualitative coding. The written responses are passed through a pretrained large language model (BERT) to automatically generate latent representations of these self-reported experiences. A dimensionally reduced form of these latent representations is then used as features to train, validate, and test a series of classification-based machine learning models that, in total, summarize key themes of the reported negative experience. This pipeline is then used to explore the relative importance of different aspects of negative experiences (e.g., who was affected, who was responsible, when the experience occurred, what trauma was sustained, whether the event was personally witnessed) by treating them as features in a new set of models tasked with predicting PCL-5-related outcomes.&lt;/p&gt;
&lt;p&gt;The primary goal of this work is to provide a detailed example of how pretrained large language models can be leveraged to profile mental health and address questions in psychological science using real-world data.
Through highlighting and explaining major decision points along the way, from data preprocessing to results interpretation, this work may serve as a starting point for researchers to employ large language models in their own work.&lt;/p&gt;
</description>
    </item>
    
  </channel>
</rss>
