68 datasets found
  1. EASIER Emotion Detection from Text

    • live.european-language-grid.eu
    Cite
    EASIER Emotion Detection from Text [Dataset]. https://live.european-language-grid.eu/catalogue/tool-service/23050
    Explore at:
    License

    https://www.gnu.org/licenses/gpl-3.0-standalone.html

    Description

    This repository contains data and code for emotion detection in text, developed in the EASIER project on sign language translation. The end goal within the project is to provide emotional cues to an avatar for the translation direction from spoken to sign language. While the EASIER project is multilingual, emotion detection so far is restricted to German.

  2. emotion

    • huggingface.co
    Updated Jul 14, 2020
    + more versions
    Cite
    DAIR.AI (2020). emotion [Dataset]. https://huggingface.co/datasets/dair-ai/emotion
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jul 14, 2020
    Dataset provided by
    DAIR.AI
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for "emotion"

      Dataset Summary
    

    Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information, please refer to the paper.

      Supported Tasks and Leaderboards
    

    More Information Needed

      Languages
    

    More Information Needed

      Dataset Structure

      Data Instances

    An example looks as follows: { "text": "im feeling quite sad… See the full description on the dataset page: https://huggingface.co/datasets/dair-ai/emotion.

  3. speech-emotion-recognition-dataset

    • huggingface.co
    Updated Jul 14, 2023
    Cite
    Training Data (2023). speech-emotion-recognition-dataset [Dataset]. https://huggingface.co/datasets/TrainingDataPro/speech-emotion-recognition-dataset
    Explore at:
    Dataset updated
    Jul 14, 2023
    Authors
    Training Data
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The audio dataset consists of a collection of texts spoken with four distinct emotions. These texts are spoken in English and represent four different emotional states: euphoria, joy, sadness, and surprise. Each audio clip captures the tone, intonation, and nuances of speech as individuals convey their emotions through their voice. The dataset includes a diverse range of speakers, ensuring variability in age, gender, and cultural background, allowing for a more comprehensive representation of the emotional spectrum. The dataset is labeled and organized based on the emotion expressed in each audio sample, making it a valuable resource for emotion recognition and analysis. Researchers and developers can utilize this dataset to train and evaluate machine learning models and algorithms, aiming to accurately recognize and classify emotions in speech.

  4. Emoton detection for Bangla language

    • figshare.com
    txt
    Updated Oct 5, 2020
    Cite
    Sadia Afrin Purba (2020). Emoton detection for Bangla language [Dataset]. http://doi.org/10.6084/m9.figshare.13052789.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 5, 2020
    Dataset provided by
    figshare
    Authors
    Sadia Afrin Purba
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this dataset, there are Bangla sentences manually labeled with three emotions: Happy, Sad, and Angry.

  5. Bangla YouTube Sentiment and Emotion datasets

    • kaggle.com
    zip
    Updated Sep 9, 2019
    Cite
    nit003 (2019). Bangla YouTube Sentiment and Emotion datasets [Dataset]. https://www.kaggle.com/datasets/nit003/bangla-youtube-sentiment-and-emotion-datasets
    Explore at:
    Available download formats: zip (439,462 bytes)
    Dataset updated
    Sep 9, 2019
    Authors
    nit003
    Area covered
    YouTube
    Description

    Context

    This dataset is used in Multilabel sentiment analysis and emotion detection for YouTube comments in different kinds of Bengali videos.

    Content

    There are two files in the folder. There may be multiple comments with the same text. Note that the comments collected here contain abusive and vulgar words, slang, and personal attacks; therefore, all annotators were adults.

    Sentiment.csv

    • Id - Unique id number for the comment.
    • Text - Text of the comment.
    • Label - 1 (3-class label) or 2 (5-class label).
    • Score - Denotes the polarity of the comment. In three-class labelling: 1 (positive), 0 (neutral), -1 (negative). In five-class labelling: 2 (highly positive), 1 (positive), 0 (neutral), -1 (negative), -2 (highly negative).
    • Lan - Language of the comment: EN (English), BN (Bengali), RN (Romanized Bangla).
    • Domain - Category of the video.

    Emotion.csv

    • Id - Unique id number for the comment.
    • Text - Text of the comment.
    • emotion - Corresponding emotion of the comment: Anger/Joy/Disgust/Fear/Surprise/Sad/None (no emotion found).
    • Lan - Language of the comment: EN (English), BN (Bengali), RN (Romanized Bangla).
    • Domain - Category of the video.
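The two CSV schemas above are simple flat tables, so they can be read with Python's standard csv module. The rows below are invented stand-ins for illustration, not taken from the actual dataset.

```python
import csv
import io

# Invented sample rows following the Emotion.csv schema described above.
sample_emotion_csv = """Id,Text,emotion,Lan,Domain
1,this video made me laugh,Joy,EN,comedy
2,what a terrible decision,Anger,BN,news
3,just a normal day,None,RN,vlog
"""

def load_comments(fp):
    """Return one dict per comment row."""
    return list(csv.DictReader(fp))

comments = load_comments(io.StringIO(sample_emotion_csv))

# Group comment texts by language code (EN / BN / RN).
by_lang = {}
for row in comments:
    by_lang.setdefault(row["Lan"], []).append(row["Text"])

print(by_lang["EN"])  # ['this video made me laugh']
```

The same reader works unchanged for Sentiment.csv, since only the column names differ.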

    Acknowledgements

    If you use the dataset in any research work, please cite the following paper as

    N. Irtiza Tripto and M. Eunus Ali, "Detecting Multilabel Sentiment and Emotions from Bangla YouTube Comments," 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), Sylhet, 2018, pp. 1-6.

    doi: 10.1109/ICBSLP.2018.8554875

    Inspiration

    It will be helpful for researchers, especially those analyzing sentiment from social media in non-English languages.

  6. Sentiment Analysis Word Lists Dataset

    • kaggle.com
    Updated Oct 23, 2023
    Cite
    Prajwal Kanade (2023). Sentiment Analysis Word Lists Dataset [Dataset]. https://www.kaggle.com/datasets/prajwalkanade/sentiment-analysis-word-lists-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Oct 23, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Prajwal Kanade
    Description

    Dataset Description:

    This dataset comprises two text files, one containing a list of positive words and the other a list of negative words. These files are intended to serve as essential resources for sentiment analysis and natural language processing tasks.

    • Positive Words File: This file contains a collection of words and terms that typically convey positive sentiment or emotions. These words are often associated with happiness, satisfaction, approval, or positive experiences.

    • Negative Words File: The second file includes a compilation of words and phrases that commonly express negative sentiments or emotions. These words are often related to displeasure, disappointment, criticism, or negative experiences.

    Use Cases:

    The dataset can be used in various applications and research areas, including:

    1. Sentiment Analysis: Researchers and developers can employ these word lists to help assess the sentiment of text data and categorize it as positive, negative, or neutral.

    2. Text Classification: These datasets can be utilized in text classification tasks, such as determining the sentiment of product reviews, social media posts, or customer feedback.

    3. Emotion Detection: The words in these lists can aid in emotion detection and understanding the emotional tone of text.

    4. NLP Research: Linguists and researchers in the field of natural language processing (NLP) may use these datasets to analyze language patterns and emotional expression in texts.

    5. Content Analysis: Businesses and organizations can leverage this data to gain insights into public opinion, brand perception, and content analysis.

    These lists of positive and negative words serve as valuable references for sentiment-related analyses, enabling automated assessment of the emotional content of textual data in a wide range of applications.
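As a sketch of use case 1 above, here is a minimal lexicon-based sentiment scorer of the kind these word lists enable. The tiny positive/negative sets are invented placeholders for the dataset's full files.

```python
# Minimal lexicon-based sentiment scoring. The word sets below are
# invented stand-ins for the dataset's full positive/negative files.
positive_words = {"good", "happy", "love", "excellent"}
negative_words = {"bad", "sad", "hate", "terrible"}

def sentiment_score(text):
    """Return (#positive tokens - #negative tokens) for the text."""
    tokens = text.lower().split()
    pos = sum(t in positive_words for t in tokens)
    neg = sum(t in negative_words for t in tokens)
    return pos - neg

def classify(text):
    """Map the raw score to a positive/negative/neutral label."""
    score = sentiment_score(text)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("I love this excellent product"))  # positive
```

In practice the two sets would be loaded from the dataset's text files, one word per line.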

  7. EmoLit

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 27, 2023
    Cite
    Rei, Luis (2023). EmoLit [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7883953
    Explore at:
    Dataset updated
    Jun 27, 2023
    Dataset authored and provided by
    Rei, Luis
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Emotions in Literature

    Description: Literature sentences from Project Gutenberg. 38 emotion labels (plus neutral examples). Semi-supervised dataset.

    Article

    Detecting Fine-Grained Emotions in Literature

    Please cite:

    @Article{app13137502,
      AUTHOR = {Rei, Luis and Mladenić, Dunja},
      TITLE = {Detecting Fine-Grained Emotions in Literature},
      JOURNAL = {Applied Sciences},
      VOLUME = {13},
      YEAR = {2023},
      NUMBER = {13},
      ARTICLE-NUMBER = {7502},
      URL = {https://www.mdpi.com/2076-3417/13/13/7502},
      ISSN = {2076-3417},
      DOI = {10.3390/app13137502}
    }

    Abstract

    Emotion detection in text is a fundamental aspect of affective computing and is closely linked to natural language processing. Its applications span various domains, from interactive chatbots to marketing and customer service. This research specifically focuses on its significance in literature analysis and understanding. To facilitate this, we present a novel approach that involves creating a multi-label fine-grained emotion detection dataset, derived from literary sources. Our methodology employs a simple yet effective semi-supervised technique. We leverage textual entailment classification to perform emotion-specific weak-labeling, selecting examples with the highest and lowest scores from a large corpus. Utilizing these emotion-specific datasets, we train binary pseudo-labeling classifiers for each individual emotion. By applying this process to the selected examples, we construct a multi-label dataset. Using this dataset, we train models and evaluate their performance within a traditional supervised setting. Our model achieves an F1 score of 0.59 on our labeled gold set, showcasing its ability to effectively detect fine-grained emotions. Furthermore, we conduct evaluations of the model's performance in zero- and few-shot transfer scenarios using benchmark datasets. Notably, our results indicate that the knowledge learned from our dataset exhibits transferability across diverse data domains, demonstrating its potential for broader applications beyond emotion detection in literature. Our contribution thus includes a multi-label fine-grained emotion detection dataset built from literature, the semi-supervised approach used to create it, as well as the models trained on it. This work provides a solid foundation for advancing emotion detection techniques and their utilization in various scenarios, especially within cultural heritage analysis.
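The selection step the abstract describes, keeping the highest- and lowest-scoring examples per emotion from entailment-based weak labeling, can be sketched as follows. The sentences and scores are invented, and this illustrates the idea rather than the authors' actual pipeline.

```python
# Sketch of the emotion-specific weak-labeling selection: from
# (example, entailment score) pairs for one emotion, keep the top-k
# as positives and the bottom-k as negatives, which would then train
# a binary pseudo-labeling classifier for that emotion.
def select_weak_labels(scored, k):
    """scored: list of (text, score) pairs; returns (positives, negatives)."""
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    positives = [text for text, _ in ranked[:k]]
    negatives = [text for text, _ in ranked[-k:]]
    return positives, negatives

# Invented entailment scores for the emotion "joy".
scored = [("I can't stop smiling", 0.97),
          ("The train was late", 0.41),
          ("He slammed the door", 0.12),
          ("What a wonderful day", 0.88)]

pos, neg = select_weak_labels(scored, k=1)
print(pos, neg)
```

Repeating this per emotion, then merging the per-emotion pseudo-labels over the same examples, yields the multi-label dataset described above.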

    Labels

    • admiration: finds something admirable, impressive or worthy of respect

    • amusement: finds something funny, entertaining or amusing

    • anger: is angry, furious, or strongly displeased; displays ire, rage, or wrath

    • annoyance: is annoyed or irritated

    • approval: expresses a favorable opinion, approves, endorses or agrees with something or someone

    • boredom: feels bored, uninterested, monotony, tedium

    • calmness: is calm, serene, free from agitation or disturbance, experiences emotional tranquility

    • caring: cares about the well-being of someone else, feels sympathy, compassion, affectionate concern towards someone, displays kindness or generosity

    • courage: feels courage or the ability to do something that frightens one, displays fearlessness or bravery

    • curiosity: is interested, curious, or has strong desire to learn something

    • desire: has a desire or ambition, wants something, wishes for something to happen

    • despair: feels despair, helpless, powerless, loss or absence of hope, desperation, despondency

    • disappointment: feels sadness or displeasure caused by the non-fulfillment of hopes or expectations, being or let down, expresses regret due to the unfavorable outcome of a decision

    • disapproval: expresses an unfavorable opinion, disagrees or disapproves of something or someone

    • disgust: feels disgust, revulsion, finds something or someone unpleasant, offensive or hateful

    • doubt: has doubt or is uncertain about something, bewildered, confused, or shows lack of understanding

    • embarrassment: feels embarrassed, awkward, self-conscious, shame, or humiliation

    • envy: is covetous, feels envy or jealousy; begrudges or resents someone for their achievements, possessions, or qualities

    • excitement: feels excitement or great enthusiasm and eagerness

    • faith: expresses religious faith, has a strong belief in the doctrines of a religion, or trust in god

    • fear: is afraid or scared due to a threat, danger, or harm

    • frustration: feels frustrated: upset or annoyed because of inability to change or achieve something

    • gratitude: is thankful or grateful for something

    • greed: is greedy, rapacious, avaricious, or has selfish desire to acquire or possess more than what one needs

    • grief: feels grief or intense sorrow, or grieves for someone who has died

    • guilt: feels guilt, remorse, or regret to have committed wrong or failed in an obligation

    • indifference: is uncaring, unsympathetic, uncharitable, or callous, shows indifference, lack of concern, coldness towards someone

    • joy: is happy, feels joy, great pleasure, elation, satisfaction, contentment, or delight

    • love: feels love, strong affection, passion, or deep romantic attachment for someone

    • nervousness: feels nervous, anxious, worried, uneasy, apprehensive, stressed, troubled or tense

    • nostalgia: feels nostalgia, longing or wistful affection for the past, something lost, or for a period in one's life, feels homesickness, a longing for one's home, city, or country while being away; longing for a familiar place

    • optimism: feels optimism or hope, is hopeful or confident about the future, that something good may happen, or the success of something

    • pain: feels physical pain or experiences physical suffering

    • pride: is proud, feels pride from one's own achievements, self-fulfillment, or from the achievements of those with whom one is closely associated, or from qualities or possessions that are widely admired

    • relief: feels relaxed, relief from tension or anxiety

    • sadness: feels sadness, sorrow, unhappiness, depression, dejection

    • surprise: is surprised, astonished or shocked by something unexpected

    • trust: trusts or has confidence in someone, or believes that someone is good, honest, or reliable

    Dataset

    EmoLit (Zenodo)

    Code

    EmoLit Train (Github)

    Models

  8. Emotion Detection and Recognition Market Size, Trends [2031]

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 3, 2023
    Cite
    Growth Market Reports (2023). Emotion Detection and Recognition Market Size, Trends [2031] [Dataset]. https://growthmarketreports.com/report/emotion-detection-and-recognition-market-global-industry-analysis
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Oct 3, 2023
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    The Global Emotion Detection and Recognition Market size was valued at USD 26.15 Billion in 2022 and is expected to surpass USD 140.07 Billion by 2031, expanding at a CAGR of 20.5% during the forecast period, 2023 – 2031. Emotion detection and recognition involves deploying advanced technology and software designed to read or detect human emotions. It typically evaluates emotions using image-processing technology that examines facial expressions. Emotions have a significant impact on how we behave and think, and understanding the emotions of a consumer or patient is vital for various industries.

    Social communication is an important part of our lives, and emotional expressions help us comprehend the mental state of people around us, allowing us to communicate more effectively. Emotion detection and recognition research has gained traction as a result of its wide applications in perceptual and cognitive sciences as well as affective computing and computer animation. This technology is extensively used by businesses to detect consumer opinions that may have an impact on purchasing and consumption behavior, which in turn has a major impact on the level of production. The law enforcement sector uses several techniques for human monitoring and tracking, such as lie detector tests.

    Market Trends, Drivers, Restraints, and Opportunities:

    • Increasing awareness regarding the benefits of emotion recognition systems in the surveillance sector and their wide applications across several industries propels market growth.

    • Rising introduction of innovative technology in wearable devices and expansion of end-users of the technology for various emotion detection applications are expected to fuel market expansion.

    • Growing demand for enhanced client experiences as well as the requirement for a human touch in digital interactions are key factors driving market growth.

    • High cost of the device systems and issues related to misinterpretation of emotional analysis are major hindrances to the growth of the market.

  9. Multimodal dataset or emotion recognition

    • figshare.com
    bin
    Updated Jun 28, 2023
    Cite
    Наталя Шаховська (2023). Multimodal dataset or emotion recognition [Dataset]. http://doi.org/10.6084/m9.figshare.23596362.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 28, 2023
    Dataset provided by
    figshare
    Authors
    Наталя Шаховська
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Video, audio, and text collection.

  10. Emotions dataset for NLP + Neutral Emotion

    • kaggle.com
    Updated Jan 1, 2024
    Cite
    Thuan Naheem Pakeer (2024). Emotions dataset for NLP + Neutral Emotion [Dataset]. https://www.kaggle.com/datasets/thuannaheempakeer/emotions-dataset-for-nlp-neutral-emotion
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 1, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Thuan Naheem Pakeer
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset builds on the "Emotions dataset for NLP" by Praveen (https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp/data).

    This dataset contains an additional neutral emotion label, which was obtained from the "Emotion Detection from Text" by Pashupati Gupta (https://www.kaggle.com/datasets/pashupatigupta/emotion-detection-from-text).

  11. Data from: ‘MONOVAB’: An Annotated Corpus for Bangla Multi-label Emotion...

    • figshare.com
    txt
    Updated Sep 27, 2023
    Cite
    Dr. Sumit Kumar Banshal; Sajal Das; Shumaiya Akter Shammi; Narayan Ranjan Chakraborty (2023). ‘MONOVAB’: An Annotated Corpus for Bangla Multi-label Emotion Detection [Dataset]. http://doi.org/10.6084/m9.figshare.24199260.v3
    Explore at:
    Available download formats: txt
    Dataset updated
    Sep 27, 2023
    Dataset provided by
    figshare
    Authors
    Dr. Sumit Kumar Banshal; Sajal Das; Shumaiya Akter Shammi; Narayan Ranjan Chakraborty
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Facebook pages of the most popular news portals, such as Prothom Alo, BBC Bangla, BD News 24, Bangla Tribune, Kaler Kantho, and Daily Jugantor, were picked to build the dataset. Following manual selection of posts, a total of 130 posts across 11 news topics were obtained and converted into a CSV file. The dataset is annotated with Ekman's seven universal emotions, and the posts were collected using a self-developed scraper algorithm.

  12. Emotion AI Market By Solutions (Emotion Recognition, Emotion AI SDKs and...

    • verifiedmarketresearch.com
    Updated Feb 12, 2025
    Cite
    VERIFIED MARKET RESEARCH (2025). Emotion AI Market By Solutions (Emotion Recognition, Emotion AI SDKs and APIs), By Type (Text-Focused, Voice-Focused), Technology (Machine Learning, NLP), & Region for 2025-2032 [Dataset]. https://www.verifiedmarketresearch.com/product/emotion-ai-market/
    Explore at:
    Dataset updated
    Feb 12, 2025
    Dataset authored and provided by
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2025 - 2032
    Area covered
    Global
    Description

    Emotion AI Market size was valued at USD 5.23 Billion in 2024 and is projected to reach USD 28.63 Billion by 2032, growing at a CAGR of 22.5% from 2025 to 2032.

    The Emotion AI market is driven by the rising demand for enhanced customer experience across industries like retail, healthcare, and automotive. Businesses are leveraging AI-driven emotion recognition to personalize interactions, improve engagement, and boost sales. The growing adoption of AI in sentiment analysis, chatbots, and virtual assistants further fuels market expansion.

    Advancements in deep learning, facial recognition, and natural language processing (NLP) are accelerating the development of Emotion AI. Increasing investments in AI research and the integration of emotion recognition in smart devices, security systems, and entertainment platforms are also key growth drivers.

  13. BanglaEmotion Dataset

    • paperswithcode.com
    Updated Jul 17, 2019
    + more versions
    Cite
    (2019). BanglaEmotion Dataset [Dataset]. https://paperswithcode.com/dataset/banglaemotion
    Explore at:
    Dataset updated
    Jul 17, 2019
    Description

    BanglaEmotion is a manually annotated Bangla emotion corpus, which incorporates the diversity of fine-grained emotion expressions in social-media text. Fine-grained emotion labels are considered, such as Sadness, Happiness, Disgust, Surprise, Fear, and Anger, which are, according to Paul Ekman (1999), the six basic emotion categories. For this task, a large amount of raw text data was collected from users' comments on two different Facebook groups (Ekattor TV and Airport Magistrates) and from the public posts of a popular blogger and activist, Dr. Imran H Sarker. These comments are mostly reactions to ongoing socio-political issues and to the economic successes and failures of Bangladesh. A total of 32,923 comments were scraped from the three sources mentioned above. Of these, 6,314 comments were annotated into the six categories. The distribution of the annotated corpus is as follows:

    sad = 1341, happy = 1908, disgust = 703, surprise = 562, fear = 384, angry = 1416

    A balanced set is also provided from the above data, and the dataset is split into training and test sets. A proportion of 5:1 is used for training and evaluation purposes. More information on the dataset and the experiments on it can be found in our paper (related links below).
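The quoted class counts and the 5:1 split can be checked with a few lines of Python. The floor-division split below is an assumed interpretation of the stated ratio, not the authors' exact procedure.

```python
# Class distribution quoted above; verify it sums to the 6314
# annotated comments, then sketch a per-class 5:1 train/eval split.
counts = {"sad": 1341, "happy": 1908, "disgust": 703,
          "surprise": 562, "fear": 384, "angry": 1416}

total = sum(counts.values())
print(total)  # 6314

# 5:1 ratio -> 5/6 of each class for training, the rest for evaluation.
split = {label: (n * 5 // 6, n - n * 5 // 6) for label, n in counts.items()}
print(split["sad"])  # (1117, 224)
```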

  14. Emotion Recognition and Sentiment Analysis Software Market Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Mar 19, 2025
    Cite
    Market Report Analytics (2025). Emotion Recognition and Sentiment Analysis Software Market Report [Dataset]. https://www.marketreportanalytics.com/reports/emotion-recognition-and-sentiment-analysis-software-market-11382
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Mar 19, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Emotion Recognition and Sentiment Analysis Software Market is experiencing robust growth, projected to reach $849.76 million in 2025 and maintain a Compound Annual Growth Rate (CAGR) of 14.15% from 2025 to 2033. This expansion is fueled by several key drivers. Increasing adoption of AI-powered solutions across diverse sectors, including customer service, market research, and healthcare (patient diagnosis), is a primary factor. Businesses leverage these tools to gain valuable insights into customer preferences, improve product development, and personalize user experiences. The rise of cloud-based deployment models further accelerates market growth, offering scalability, cost-effectiveness, and enhanced accessibility. Furthermore, the growing need for effective brand monitoring and reputation management, particularly on social media, is driving demand for sentiment analysis tools. While data privacy concerns and ethical considerations surrounding emotion recognition technology pose certain restraints, the overall market outlook remains exceptionally positive. The market is segmented by application (customer service/experience, product/market research, patient diagnosis, others) and deployment (on-premises, cloud-based), reflecting the diverse use cases and deployment preferences of different industries. North America currently holds a significant market share, driven by early adoption and technological advancements. However, APAC is expected to exhibit substantial growth in the coming years, fueled by increasing digitalization and a burgeoning tech industry in countries like China and Japan. Leading companies are focusing on strategic partnerships, acquisitions, and the development of innovative solutions to maintain a competitive edge in this rapidly evolving landscape. The competitive landscape is characterized by a mix of established tech giants like Microsoft and IBM alongside specialized emotion AI companies. 
The market’s success hinges on the continuous improvement of algorithm accuracy, addressing ethical concerns, and ensuring responsible data handling. Future growth will depend on advancements in deep learning and computer vision, enabling more nuanced and accurate emotion recognition across various modalities, including facial expressions, voice tone, and text analysis. Addressing data bias and ensuring compliance with data privacy regulations are crucial for sustainable growth. The market's segmentation reflects its adaptability across various industries, underscoring its potential for widespread application and sustained expansion throughout the forecast period.

  15. RAVDESS Facial Landmark Tracking

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Oct 19, 2024
    + more versions
    Cite
    Riley Swanson; Steven R. Livingstone; Steven R. Livingstone; Frank A. Russo; Frank A. Russo; Riley Swanson (2024). RAVDESS Facial Landmark Tracking [Dataset]. http://doi.org/10.5281/zenodo.3255102
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 19, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Riley Swanson; Steven R. Livingstone; Steven R. Livingstone; Frank A. Russo; Frank A. Russo; Riley Swanson
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Contact Information

    If you would like further information about the RAVDESS Facial Landmark Tracking data set, or if you experience any issues downloading files, please contact us at ravdess@gmail.com.

    Tracking Examples

    Watch a sample of the facial tracking results.

    Description

    The RAVDESS Facial Landmark Tracking dataset contains tracked facial landmark movements from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [RAVDESS Zenodo page]. Motion tracking of actors' faces was produced by OpenFace 2.1.0 (Baltrusaitis, T., Zadeh, A., Lim, Y. C., & Morency, L. P., 2018). Tracked information includes: facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

    The Facial Landmark Tracking dataset was created in the Affective Data Science Lab.

    This data set contains tracking for all 2452 RAVDESS trials. All tracking movement data are contained in "FacialTracking_Actors_01-24.zip", which contains 2452 .CSV files. Each actor has 104 tracked trials (60 speech, 44 song). Note that there are no song files for Actor 18.

    Total Tracked Files = (24 Actors x 60 Speech trials) + (23 Actors x 44 Song trials) = 2452 files.

    Tracking results for each trial are provided as individual comma separated value files (CSV format). File naming convention of tracked files is identical to that of the RAVDESS. For example, tracked file "01-01-01-01-01-01-01.csv" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4". For a complete description of the RAVDESS file naming convention and experimental manipulations, please see the RAVDESS Zenodo page.
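Since only the extension differs between a tracked file and its source video, the mapping can be sketched in a few lines (the helper name is ours, not part of the dataset):

```python
# Map a tracked CSV filename to its RAVDESS source video filename;
# the naming convention is identical apart from the extension.
def tracked_to_video(csv_name):
    """'01-01-01-01-01-01-01.csv' -> '01-01-01-01-01-01-01.mp4'"""
    stem, _, ext = csv_name.rpartition(".")
    assert ext == "csv", "expected a tracked .csv file"
    return stem + ".mp4"

print(tracked_to_video("01-01-01-01-01-01-01.csv"))  # 01-01-01-01-01-01-01.mp4
```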

    Tracking overlay videos for all trials are also provided (720p Xvid, .avi), one zip file per Actor. As the RAVDESS does not contain "ground truth" facial landmark locations, the overlay videos provide a visual 'sanity check' for researchers to confirm the general accuracy of the tracking results. The file naming convention of tracking overlay videos also matches that of the RAVDESS. For example, tracking video "01-01-01-01-01-01-01.avi" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4".

    Tracking File Output Format

    This dataset retains OpenFace's output format, described here in detail. The resolution of all input videos was 1280x720; where tracking output is in pixels, values range from (0,0) (top-left corner) to (1280,720) (bottom-right corner).

    Columns 1-3 = Timing and Detection Confidence

    • 1. Frame - The number of the frame (source videos 30 fps), range = 1 to n
    • 2. Timestamp - Time of frame, range = 0 to m
    • 3. Confidence - Tracker confidence level in current landmark detection estimate, range = 0 to 1

    Columns 4-291 = Eye Gaze Detection

    • 4-6. gaze_0_x, gaze_0_y, gaze_0_z - Eye gaze direction vector in world coordinates for eye 0 (normalized), eye 0 is the leftmost eye in the image (think of it as a ray going from the left eye in the image in the direction of the eye gaze).
    • 7-9. gaze_1_x, gaze_1_y, gaze_1_z - Eye gaze direction vector in world coordinates for eye 1 (normalized), eye 1 is the rightmost eye in the image (think of it as a ray going from the right eye in the image in the direction of the eye gaze).
    • 10-11. gaze_angle_x, gaze_angle_y - Eye gaze direction in radians in world coordinates, averaged over both eyes. Looking left to right changes gaze_angle_x (from positive to negative); looking up and down changes gaze_angle_y (from negative to positive); looking straight ahead puts both angles close to 0 (within measurement error).
    • 12-123. eye_lmk_x_0, ..., eye_lmk_x_55, eye_lmk_y_0, ..., eye_lmk_y_55 - Location of 2D eye region landmarks in pixels. A figure describing the landmark index can be found here.
    • 124-291. eye_lmk_X_0, ..., eye_lmk_X_55, eye_lmk_Y_0, ..., eye_lmk_Y_55, eye_lmk_Z_0, ..., eye_lmk_Z_55 - Location of 3D eye region landmarks in millimeters. A figure describing the landmark index can be found here.
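    As a minimal sketch of consuming these columns, the snippet below loads the averaged gaze angles with pandas. The tiny in-memory CSV stands in for a real tracking file (which has hundreds of columns), and the header strip() guards against the space-padded column names that some OpenFace versions emit.

```python
import io
import pandas as pd

# A tiny stand-in for one tracking file (real files have 600+ columns).
sample = io.StringIO(
    "frame, timestamp, confidence, gaze_angle_x, gaze_angle_y\n"
    "1, 0.000, 0.98, 0.05, -0.01\n"
    "2, 0.033, 0.97, 0.31, 0.02\n"
)
df = pd.read_csv(sample)
df.columns = df.columns.str.strip()   # normalize possibly padded headers

# Frames where gaze deviates noticeably (> 0.2 rad) from straight ahead:
side_glances = df[df["gaze_angle_x"].abs() > 0.2]
print(len(side_glances))  # → 1
```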

    Columns 292-297 = Head pose

    • 292-294. pose_Tx, pose_Ty, pose_Tz - Location of the head with respect to camera in millimeters (positive Z is away from the camera).
    • 295-297. pose_Rx, pose_Ry, pose_Rz - Rotation of the head in radians around X,Y,Z axes with the convention R = Rx * Ry * Rz, left-handed positive sign. This can be seen as pitch (Rx), yaw (Ry), and roll (Rz). The rotation is in world coordinates with the camera being located at the origin.
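    To make the stated convention R = Rx * Ry * Rz concrete, the sketch below composes the three elemental rotations from the pose_Rx/Ry/Rz angle columns. It uses standard right-handed rotation matrices; given the note above about a left-handed positive sign, the signs should be verified against the overlay videos before being relied on.

```python
import numpy as np

def head_rotation(rx: float, ry: float, rz: float) -> np.ndarray:
    """Compose pitch (rx), yaw (ry), roll (rz) as R = Rx @ Ry @ Rz."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

# Zero rotation is the identity:
print(np.allclose(head_rotation(0.0, 0.0, 0.0), np.eye(3)))  # → True
```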

    Columns 298-433 = Facial Landmarks locations in 2D

    • 298-433. x_0, ..., x_67, y_0,...y_67 - Location of 2D landmarks in pixels. A figure describing the landmark index can be found here.

    Columns 434-637 = Facial Landmarks locations in 3D

    • 434-637. X_0, ..., X_67, Y_0, ..., Y_67, Z_0, ..., Z_67 - Location of 3D landmarks in millimeters. A figure describing the landmark index can be found here. For these values to be accurate, OpenFace needs good estimates of fx, fy, cx, and cy.

    Columns 638-677 = Rigid and non-rigid shape parameters

    Parameters of a point distribution model (PDM) that describe the rigid face shape (location, scale, and rotation) and non-rigid face shape (deformation due to expression and identity). For more details, please refer to chapter 4.2 of Tadas Baltrusaitis's PhD thesis [download link].

    • 638-643. p_scale, p_rx, p_ry, p_rz, p_tx, p_ty - Scale, rotation, and translation terms of the PDM.
    • 644-677. p_0, ..., p_33 - Non-rigid shape parameters.

    Columns 678-712 = Facial Action Units

    Facial Action Units (AUs) are a way to describe human facial movements (Ekman, Friesen, and Hager, 2002) [wiki link]. More information on OpenFace's implementation of AUs can be found here.

    • 678-694. AU01_r, AU02_r, AU04_r, AU05_r, AU06_r, AU07_r, AU09_r, AU10_r, AU12_r, AU14_r, AU15_r, AU17_r, AU20_r, AU23_r, AU25_r, AU26_r, AU45_r - Intensity of AU movement, range from 0 (no muscle contraction) to 5 (maximal muscle contraction).
    • 695-712. AU01_c, AU02_c, AU04_c, AU05_c, AU06_c, AU07_c, AU09_c, AU10_c, AU12_c, AU14_c, AU15_c, AU17_c, AU20_c, AU23_c, AU25_c, AU26_c, AU28_c, AU45_c - Presence or absence of 18 AUs, range 0 (absent, not detected) to 1 (present, detected).
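    As a sketch of how the AU columns can be consumed, the snippet below computes, for each presence column, the fraction of frames in which the AU was detected. A three-AU in-memory CSV stands in for a real tracking file.

```python
import io
import pandas as pd

# Stand-in for a real tracking file, keeping only a few AU columns.
sample = io.StringIO(
    "frame, AU06_r, AU12_r, AU06_c, AU12_c, AU45_c\n"
    "1, 0.0, 2.5, 0.0, 1.0, 0.0\n"
    "2, 1.2, 3.1, 1.0, 1.0, 0.0\n"
)
df = pd.read_csv(sample)
df.columns = df.columns.str.strip()

presence_cols = [c for c in df.columns if c.endswith("_c")]
active = {au: df[au].mean() for au in presence_cols}  # fraction of frames with AU present
print(active)  # AU12 present in every frame, AU06 in half, AU45 in none
```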

    Note that OpenFace's columns 2 and 5 (face_id and success, respectively) were not included in this dataset. These values were redundant, as a single face was detected in every frame of all 2452 trials.

    Tracking Overlay Videos

    Tracking overlay videos visualize most aspects of the tracking output described above.

    • Frame - Column 1, Top left corner of video
    • Eye Gaze - Columns 4-11. Indicated by green rays emanating from the left and right eyes.
    • Eye region landmarks 2D - Columns 12-123. Red landmarks around left and right eyes, and black circles surrounding left and right irises.
    • Head pose - Columns 292-297. Blue bounding box surrounding the actor's head.
    • Facial landmarks 2D - Columns 298-433. Red landmarks on the participant's left and right eyebrows, nose, lips, and jaw.
    • Facial Action Unit Intensity - Columns 678-694. All 17 AUs are listed on the left side of the video in black text. The intensity level (0-5) of each AU is indicated by the numeric value and a blue bar.
    • Facial Action Unit Presence - Columns 695-712. All 18 AUs are listed on the right side of the video in black & green text. Absence of an AU (0) is in black text with the numeric value 0.0. Presence of an AU (1) is in green text with the numeric value 1.0.

    Camera Parameters and 3D Calibration Procedure

    This dataset contains accurate estimates of actors' 3D head poses. To produce these, camera parameters at the time of recording were required (distance from camera to actor, and camera field of view). These values were used with OpenCV's camera calibration procedure, described here, to produce estimates of the camera's focal length and optical center at the time of actor recordings. The four values produced by the calibration procedure (fx, fy, cx, cy) were input to OpenFace as command line arguments during facial tracking, described here, to produce accurate estimates of 3D head pose.
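    For readers without access to the original calibration, the pinhole model gives a rough stand-in for the four intrinsics: fx = (W/2) / tan(FOV_h/2), with the optical center assumed at the image center. The 60-degree field of view below is an illustrative assumption, not the recording camera's specification; the dataset's actual values came from OpenCV's calibration procedure.

```python
import math

W, H = 1280, 720             # source video resolution
fov_h = math.radians(60.0)   # assumed horizontal field of view (illustrative)

fx = (W / 2) / math.tan(fov_h / 2)   # pinhole focal length in pixels
fy = fx                              # square pixels assumed
cx, cy = W / 2, H / 2                # optical center at the image center

print(f"fx={fx:.1f} fy={fy:.1f} cx={cx:.0f} cy={cy:.0f}")
```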

    Camera

  16. h

    gn-humor-detection

    • huggingface.co
    Updated Jan 27, 2025
    Cite
    gn-humor-detection [Dataset]. https://huggingface.co/datasets/mmaguero/gn-humor-detection
    Explore at:
    Dataset updated
    Jan 27, 2025
    Authors
    Marvin M. Agüero-Torales
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Text-based affective computing

    We collected a dataset of tweets primarily written in Guarani (and Jopara, a code-switching language that combines Guarani and Spanish) and annotated them for three widely-used dimensions in sentiment analysis:

    emotion recognition (https://huggingface.co/datasets/mmaguero/gn-emotion-recognition),
    humor detection (this repo, https://huggingface.co/datasets/mmaguero/gn-humor-detection), and offensive language identification… See the full description on the dataset page: https://huggingface.co/datasets/mmaguero/gn-humor-detection.

  17. m

    Emotion Recognition And Sentiment Analysis Market Size, Share & Trends...

    • marketresearchintellect.com
    Updated Oct 16, 2020
    Cite
    Market Research Intellect® | Market Analysis and Research Reports (2020). Emotion Recognition And Sentiment Analysis Market Size, Share & Trends [2031] [Dataset]. https://www.marketresearchintellect.com/product/global-emotion-recognition-and-sentiment-analysis-market-size-forecast/
    Explore at:
    Dataset updated
    Oct 16, 2020
    Dataset authored and provided by
    Market Research Intellect® | Market Analysis and Research Reports
    License

    https://www.marketresearchintellect.com/privacy-policyhttps://www.marketresearchintellect.com/privacy-policy

    Area covered
    Global
    Description

    The Emotion Recognition and Sentiment Analysis Market is segmented by Type (Customer Feedback, Social Media Monitoring, Market Research, Product Development), Application (Sentiment Analysis Software, Emotion Detection Tools, Text Analysis, Voice Sentiment Analysis, Image Sentiment Analysis), and geographical region (North America, Europe, Asia-Pacific, South America, and Middle East and Africa).

    This report provides insights into the market size and forecasts the value of the market, expressed in USD million, across these defined segments.

  18. m

    Hate speech driven emotion recognition dataset for Bangla text

    • data.mendeley.com
    Updated Mar 10, 2025
    Cite
    Md Masudur Rahman (2025). Hate speech driven emotion recognition dataset for Bangla text [Dataset]. http://doi.org/10.17632/2py5gb6xh7.1
    Explore at:
    Dataset updated
    Mar 10, 2025
    Authors
    Md Masudur Rahman
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Hate speech detection and emotion recognition are emerging research fields in Natural Language Processing (NLP). Text that expresses hate speech also conveys emotion, so the two are related research areas and merit combined study. With this motivation, a multilevel dataset of natural Bangla text is proposed for Bangla hate speech and emotion recognition. The dataset contains 16407 statements collected from natural comments and categorized at two levels: each statement is first labeled as either hate or non-hate, and each category is then further divided into three emotions: happy, sad, and angry. The dataset is validated with Cohen's Kappa and Fleiss' Kappa measures, both of which show good scores. The dataset is applicable to machine learning and natural language processing tasks for Bangla text.

  19. h

    xed_en_fi

    • huggingface.co
    Updated May 13, 2024
    Cite
    Language Technology Research Group at the University of Helsinki (2024). xed_en_fi [Dataset]. https://huggingface.co/datasets/Helsinki-NLP/xed_en_fi
    Explore at:
    Dataset updated
    May 13, 2024
    Dataset authored and provided by
    Language Technology Research Group at the University of Helsinki
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A multilingual fine-grained emotion dataset. The dataset consists of human annotated Finnish (25k) and English sentences (30k). Plutchik’s core emotions are used to annotate the dataset with the addition of neutral to create a multilabel multiclass dataset. The dataset is carefully evaluated using language-specific BERT models and SVMs to show that XED performs on par with other similar datasets and is therefore a useful tool for sentiment analysis and emotion detection.

  20. Emotional Video Data

    • kaggle.com
    Updated Jun 7, 2024
    Cite
    Frank Wong (2024). Emotional Video Data [Dataset]. https://www.kaggle.com/datasets/nexdatafrank/emotional-video-data
    Explore at:
    Dataset updated
    Jun 7, 2024
    Dataset provided by
    Kagglehttp://kaggle.com/
    Authors
    Frank Wong
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Emotional Video Data

    Description

    Emotional Video Data, including multiple races, multiple indoor scenes, multiple age groups, multiple languages, and multiple emotions (11 types of facial emotions, 15 types of inner emotions). For each sentence in each video, annotated emotion types (both facial and inner emotions), start & end timestamps, and a text transcription are provided. This dataset can be used for tasks such as emotion recognition and sentiment analysis, enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are all GDPR, CCPA, and PIPL compliant. For more details, please refer to the link: https://www.nexdata.ai/datasets/speechrecog/977?source=Kaggle

    Format

    The video data format is .mp4, the annotation file format is .json;

    Content category

    Including multiple races, multiple indoor scenes, multiple age groups, multiple languages, multiple emotions (11 types of facial emotions, 15 types of inner emotions);

    Recording condition

    Indoor;

    Recording device

    Camera or smartphone;

    Contributor

    1,003 people, each with one or several videos covering multiple emotions. Race distribution: 232 Asian, 614 Caucasian, and 157 Black participants; gender distribution: 410 male, 593 female; age distribution: 569 young, 330 middle-aged, and 104 older participants;

    Features of annotation

    For each sentence in each video, annotated emotion types (including facial emotions and inner emotions), start & end timestamp, text transcription;

    Accuracy Rate

    Collection accuracy: according to the collection requirements, accuracy is over 97%; label annotation accuracy: language, race, gender, and age group labeling is over 97% accurate; file annotation accuracy: the word accuracy rate of text transcription is over 85%;

    Licensing Information

    Commercial License
