4 datasets found
  1. Data from: PAN19 Authorship Analysis: Cross-Domain Authorship Attribution

    • zenodo.org
    • repository.uantwerpen.be
    zip
    Updated Nov 2, 2023
    Cite
    Mike Kestemont; Efstathios Stamatatos; Enrique Manjavacas; Walter Daelemans; Martin Potthast; Benno Stein (2023). PAN19 Authorship Analysis: Cross-Domain Authorship Attribution [Dataset]. http://doi.org/10.5281/zenodo.3530313
    Available download formats: zip
    Dataset updated
    Nov 2, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mike Kestemont; Efstathios Stamatatos; Enrique Manjavacas; Walter Daelemans; Martin Potthast; Benno Stein
    Description

    Authorship attribution is an important problem in information retrieval and computational linguistics but also in applied areas such as law and journalism where knowing the author of a document (such as a ransom note) may enable e.g. law enforcement to save lives. The most common framework for testing candidate algorithms is the closed-set attribution task: given a sample of reference documents from a restricted and finite set of candidate authors, the task is to determine the most likely author of a previously unseen document of unknown authorship. This task may be quite challenging in cross-domain conditions, when documents of known and unknown authorship come from different domains (e.g., thematic area, genre). In addition, it is often more realistic to assume that the true author of a disputed document is not necessarily included in the list of candidates.

    Fanfiction refers to fictional forms of literature which are nowadays produced by admirers ('fans') of a certain author (e.g. J.K. Rowling), novel ('Pride and Prejudice'), TV series (Sherlock Holmes), etc. The fans heavily borrow from the original work's theme, atmosphere, style, characters, story world, etc. to produce new fictional literature, i.e. the so-called fanfics. This is why fanfiction is also known as transformative literature; it has generated a number of controversies in recent years related to the intellectual property rights of the original authors (cf. plagiarism). Fanfiction, however, is typically produced by fans without any explicit commercial goals. The publication of fanfics typically happens online, on informal community platforms that are dedicated to making such literature accessible to a wider audience (e.g. fanfiction.net). The original work of art or genre is typically referred to as a fandom.

    This edition of PAN focuses on cross-domain attribution in fanfiction, a task that can be more accurately described as cross-fandom attribution in fanfiction. In more detail, all documents of unknown authorship are fanfics of the same fandom (target fandom) while the documents of known authorship by the candidate authors are fanfics of several fandoms (other than the target-fandom). In contrast to the PAN-2018 edition of this task, we focus on open-set attribution conditions, namely the true author of a text in the target domain is not necessarily included in the list of candidate authors.

    Each problem consists of a set of known fanfics by each candidate author and a set of unknown fanfics, located in separate folders. The file problem-info.json, found in the main folder of each problem, gives the name of the folder of unknown documents and the list of candidate-author folder names.

    The fanfics of known authorship belong to several fandoms (excluding the target fandom). The file fandom-info.json, also in the main folder of each problem, provides information about the fandom of each fanfic of known authorship.

    The true author of each unknown document is given in the file ground-truth.json, also found in the main folder of each problem. Note that all unknown documents that are not written by any of the candidate authors belong to the <UNK> class.

    In addition, to handle a collection of such problems, the file collection-info.json lists, for each problem, its main folder, the language of its documents (either "en", "fr", "it", or "sp"), and their encoding (always UTF-8).
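
    As a rough illustration of the layout described above, the following Python sketch loads one such collection. It is only a sketch: the JSON field names ("problem-name", "language", "unknown-folder", "candidate-authors"), the local path, and the .txt extension are assumptions to be checked against the actual files, since the description above documents the contents of the JSON files but not their exact keys.

    # Minimal loading sketch (Python). The JSON keys and paths below are
    # assumptions -- verify them against your copy of the dataset.
    import json
    from pathlib import Path

    root = Path("pan19-cross-domain-attribution")           # assumed local path

    with open(root / "collection-info.json", encoding="utf-8") as f:
        collection = json.load(f)                           # assumed to be a list of problem entries

    for entry in collection:
        problem_dir = root / entry["problem-name"]          # main folder of the problem (assumed key)
        language = entry["language"]                        # "en", "fr", "it", or "sp"

        with open(problem_dir / "problem-info.json", encoding="utf-8") as f:
            info = json.load(f)

        unknown_dir = problem_dir / info["unknown-folder"]  # folder with the unknown fanfics (assumed key)
        candidates = info["candidate-authors"]              # candidate-author folder names (assumed key)

        unknown_docs = sorted(unknown_dir.glob("*.txt"))    # assumed .txt extension
        print(problem_dir.name, language, len(candidates), "candidates,",
              len(unknown_docs), "unknown documents")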

  2. PAN19 Authorship Analysis: Style Change Detection

    • zenodo.org
    Updated Aug 10, 2021
    Cite
    Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein (2021). PAN19 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.5174825
    Dataset updated
    Aug 10, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein
    Description

    This is the data set for the Style Change Detection task of PAN@CLEF 2019.

    The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is crucial for the authorship identification process and for multi-author document analysis in general. Note that, for this task, we make the assumption that a change in writing style always signifies a change in author.

    Tasks

    Given a document, we ask participants to answer the following two questions:

    • Was the given document written by multiple authors? (task 1)
    • For each pair of consecutive paragraphs in the given document: is there a style change between these paragraphs? (task 2)

    In other words, the goal is to determine whether the given document contains style changes and, if it does, to find the positions of the changes in the document (between paragraphs).

    All documents are provided in English and may contain zero up to ten style changes, resulting from at most three different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style changes).

    Data

    To develop and then test your algorithms, two data sets including ground truth information are provided. These data sets differ in their topical breadth (i.e., the number of different topics covered in the contained documents). dataset-narrow contains texts from a relatively narrow set of subject matters (all related to technology), whereas dataset-wide adds additional subject areas (travel, philosophy, economics, history, etc.).

    Both of those data sets are split into three parts:

    • training set: Contains 50% of the whole data set and includes ground truth data. Use this set to develop and train your models.
    • validation set: Contains 25% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
    • test set: Contains 25% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation.


    Input Format

    Both dataset-narrow and dataset-wide are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem.

    The structure of the provided datasets is as follows:

    train/
      dataset-narrow/
      dataset-wide/
    validation/
      dataset-narrow/
      dataset-wide/
    test/
      dataset-narrow/
      dataset-wide/

    For each problem instance X (i.e., each input document), two files are provided:

    • problem-X.txt contains the actual text, split into paragraphs.
    • truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:

    {
      "authors": NUMBER_OF_AUTHORS,
      "structure": ORDER_OF_AUTHORS,
      "site": SOURCE_SITE,
      "multi-author": RESULT_TASK1,
      "changes": RESULT_ARRAY_TASK2
    }

    The result for task 1 (key "multi-author") is a binary value (1 if the document is multi-authored, 0 if it is single-authored). The result for task 2 (key "changes") is represented as an array holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). If the document is single-authored, the solution to task 2 is an array filled with 0s. Furthermore, we provide the order of authors contained in the document (e.g., [A1, A2, A1] for a two-author document), the total number of authors, and the StackExchange site the texts were extracted from (i.e., the topic).

    An example of a multi-author document, where there was a style change between the third and fourth paragraph, could look as follows (we only list the two relevant key/value pairs here):

    {
      "multi-author": 1,
      "changes": [0,0,1,...]
    }

    A single-author document would have the following form (again, only listing the two relevant key/value pairs):

    {
      "multi-author": 0,
      "changes": [0,0,0,...]
    }
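
    Since a document is multi-authored exactly when at least one entry of "changes" is 1, the two keys can be cross-checked mechanically. The following Python sketch loads the ground truth of one split and verifies that consistency; the split path is an assumption, while the file names and keys follow the description above.

    # Sketch: read the ground truth of every problem in one split and check
    # that task 1 ("multi-author") agrees with task 2 ("changes").
    import json
    from pathlib import Path

    split_dir = Path("train/dataset-narrow")                 # assumed local path to one split

    for truth_file in sorted(split_dir.glob("truth-problem-*.json")):
        with open(truth_file, encoding="utf-8") as f:
            truth = json.load(f)

        multi_author = truth["multi-author"]                  # task 1: 1 = multi-authored, 0 = single-authored
        changes = truth["changes"]                             # task 2: 0/1 per consecutive paragraph pair

        # Multi-authored iff at least one style change occurs.
        assert multi_author == int(any(changes)), truth_file.name

        # The matching text file shares the problem ID, e.g. problem-3.txt.
        problem_id = truth_file.stem.replace("truth-", "")
        text = (split_dir / f"{problem_id}.txt").read_text(encoding="utf-8")
        print(problem_id, "authors:", truth["authors"], "changes:", sum(changes), "chars:", len(text))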

  3. PAN19 Authorship Analysis: Style Change Detection

    • zenodo.org
    Updated Aug 10, 2021
    Cite
    Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein (2021). PAN19 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.3530362
    Dataset updated
    Aug 10, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein
    Description

    Many approaches have been proposed recently to identify the author of a given document. Thereby, one fact is often silently assumed: that the given document is indeed written by only one author. For a realistic author identification system it is therefore crucial to first determine whether a document is single- or multi-authored.

    To this end, previous PAN editions aimed to analyze multi-authored documents. As it has been shown that it is a hard problem to reliably identify individual authors and their contributions within a single document (Author Diarization, 2016; Style Breach Detection, 2017), last year's task substantially relaxed the problem by asking only for a binary decision (single- or multi-authored). Considering the promising results achieved by the submitted approaches (see the overview paper for details), we continue last year's task and additionally ask participants to predict the number of involved authors.

    Given a document, participants thus should apply intrinsic style analyses to hierarchically answer the following questions:

    1. Is the document written by one or more authors, i.e., do style changes exist or not?
    2. If it is multi-authored, how many authors have collaborated?

    All documents are provided in English and may contain zero up to arbitrarily many style changes, resulting from arbitrarily many authors.

    The training set contains 50% of the whole dataset and includes solutions. Use this set to develop and train your models.

    Like last year, the whole data set is based on user posts from various sites of the StackExchange network, covering different topics and containing approximately 300 to 2000 tokens per document.

    For each problem instance X, two files are provided:

    • problem-X.txt contains the actual text
    • problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
    { "authors": number_of_authors, "structure": [author_segment_1, ..., author_segment_3], "switches": [ character_pos_switch_segment_1, ..., character_pos_switch_segment_n, ] }

    An example for a multi-author document could look as follows:

    { "authors": 4, "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"], "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }

    whereas a single-author document would have exactly the following form:

    { "authors": 1, "structure": ["A1"], "switches": [] }

    Note that authors within the structure correspond only to the respective document, i.e., they are not the same over the whole dataset. For example, author A1 in document 1 is most likely not the same author as A1 in document 2 (it could be, but as there are hundreds of authors the chances are very small that this is the case). Further, please consider that the structure and the switches are provided only as additional resources for the development of your algorithms, i.e., they are not expected to be predicted.
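
    Because "structure" lists one label per single-author segment and "switches" gives the character position of every boundary between consecutive segments, the two fields are tied together: len(switches) equals len(structure) - 1 (8 switches for the 9 segments in the example above). The Python sketch below loads one problem, derives the quantity to be predicted (the number of authors) and sanity-checks the additional metadata; the directory path and problem name are assumptions.

    # Sketch: load one problem and its .truth file, derive the expected answers
    # and sanity-check "structure" against "switches".
    import json
    from pathlib import Path

    problem_dir = Path("training")                          # assumed local path to the training set
    problem_id = "problem-1"                                # illustrative problem name

    text = (problem_dir / f"{problem_id}.txt").read_text(encoding="utf-8")
    with open(problem_dir / f"{problem_id}.truth", encoding="utf-8") as f:
        truth = json.load(f)

    n_authors = truth["authors"]                            # target: number of collaborating authors
    structure = truth["structure"]                          # author label per segment, e.g. ["A1", "A2", "A1"]
    switches = truth["switches"]                            # character positions of the segment boundaries

    # One switch per boundary between consecutive single-author segments.
    assert len(switches) == len(structure) - 1
    # Switch positions should fall inside the document text.
    assert all(0 < pos < len(text) for pos in switches)

    print(problem_id, "style changes:", n_authors > 1, "authors:", n_authors)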

    To tackle the problem, you can develop novel approaches, extend existing algorithms from last year's task or adapt approaches from related problems such as intrinsic plagiarism detection or text segmentation. You are also free to additionally evaluate your approaches on last year's training/validation/test dataset (for the number of authors use the corresponding meta data).

  4. PAN19 Author Profiling: Bots and Gender Profiling

    • zenodo.org
    Updated Apr 26, 2020
    Cite
    Francisco Rangel; Paolo Rosso (2020). PAN19 Author Profiling: Bots and Gender Profiling [Dataset]. http://doi.org/10.5281/zenodo.3692340
    Dataset updated
    Apr 26, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Francisco Rangel; Paolo Rosso
    Description

    Social media bots pose as humans to influence users with commercial, political or ideological purposes. For example, bots could artificially inflate the popularity of a product by promoting it and/or writing positive ratings, as well as undermine the reputation of competing products through negative valuations. The threat is even greater when the purpose is political or ideological (see the Brexit referendum or the US Presidential elections). Fearing the effect of this influence, the German political parties have rejected the use of bots in their electoral campaign for the general elections. Furthermore, bots are commonly related to the spreading of fake news. Therefore, approaching the identification of bots from an author profiling perspective is of high importance from the point of view of marketing, forensics and security.

    After having addressed several aspects of author profiling in social media from 2013 to 2018 (age and gender, also together with personality, gender and language variety, and gender from a multimodality perspective), this year we aim at investigating whether the author of a Twitter feed is a bot or a human and, in the case of a human, at profiling the author's gender.

    The uncompressed dataset consists of one folder per language (en, es). Each folder contains the following (a short loading sketch is given after the list):

    • An XML file per author (Twitter user) with 100 tweets. The name of the XML file corresponds to the unique author id.
    • A truth.txt file with the list of authors and the ground truth.
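
    A minimal reading sketch for one language folder follows. The per-author XML layout (tweets wrapped in document elements) and the column separator in truth.txt (":::") are assumptions and should be verified against the actual files; only the folder-per-language structure, the one-XML-file-per-author naming, and the presence of truth.txt are documented above.

    # Sketch: iterate over one language folder, reading each author's 100 tweets
    # and the corresponding ground-truth labels.
    # ASSUMPTIONS: tweets are stored as <document> elements in the XML, and
    # truth.txt uses ":::"-separated columns starting with the author id.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    lang_dir = Path("en")                                   # or "es"; assumed local path

    # Ground truth: one line per author.
    labels = {}
    for line in (lang_dir / "truth.txt").read_text(encoding="utf-8").splitlines():
        author_id, *fields = line.split(":::")              # assumed separator
        labels[author_id] = fields

    # One XML file per author, named after the unique author id.
    for xml_file in sorted(lang_dir.glob("*.xml")):
        author_id = xml_file.stem
        tree = ET.parse(xml_file)
        tweets = [d.text or "" for d in tree.iter("document")]   # assumed element name
        print(author_id, len(tweets), "tweets", labels.get(author_id))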