4 datasets found
  1. PAN22 Authorship Analysis: Authorship Verification

    • data.niaid.nih.gov
    Updated Nov 30, 2022
    Cite
    Stamatatos, Efstathios (2022). PAN22 Authorship Analysis: Authorship Verification [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6337136
    Dataset provided by
    Kestemont, Mike
    Pezik, Piotr
    Stamatatos, Efstathios
    Stein, Benno
    Potthast, Martin
    Kredens, Krzysztof
    Heini, Annina
    Bevendorff, Janek
    Description

    Download

    Access to our corpus can be requested via the Aston Institute for Forensic Linguistics Databank: https://fold.aston.ac.uk/handle/123456789/17

    Task

    Authorship verification is the task of deciding whether two texts have been written by the same author, based on comparing the texts' writing styles. In previous editions of PAN, we explored the effectiveness of authorship verification technology in several languages and text genres. In the two most recent editions, cross-domain authorship verification using fanfiction texts was examined. Despite certain differences between fandoms, the task of cross-fandom authorship verification has proved to be relatively feasible. In the current edition, we focus on more challenging scenarios where each authorship verification case considers two texts that belong to different discourse types, or DTs (cross-DT authorship verification). This will allow us to study the ability of stylometric approaches to capture authorial characteristics that remain stable across DTs even when very different forms of expression are imposed by the DT norms.

    Based on a new corpus in English, we provide cross-DT authorship verification cases using the following DTs:

    • Essays
    • Emails
    • Text messages
    • Business memos

    The corpus comprises texts from around 100 individuals. All individuals are of similar age (18-22) and are native English speakers. The topic of the text samples is not restricted, while the level of formality can vary within a given DT (e.g., text messages may be addressed to family members or to non-familial acquaintances).

    More information at: Authorship Verification 2022
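
    As a concrete illustration, here is a minimal verification sketch in Python (an illustrative assumption, not an official PAN22 baseline): it scores a text pair by the cosine similarity of character n-gram profiles, a common stylometric representation. The n-gram range, the sample texts, and the decision convention are illustrative choices, not part of the dataset.

    # Minimal cross-DT verification sketch; NOT the official PAN22 baseline.
    # Character n-grams are a standard stylometric feature that transfers
    # reasonably well across discourse types.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def same_author_score(text_a: str, text_b: str) -> float:
        """Return a similarity score in [0, 1] for a text pair."""
        vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
        profiles = vec.fit_transform([text_a, text_b])
        return float(cosine_similarity(profiles[0], profiles[1])[0, 0])

    essay = "The arguments presented herein rest on three assumptions..."
    sms = "cant make it 2nite sorry!! c u tmrw"
    print(same_author_score(essay, sms))

    In PAN verification tasks, scores are typically mapped to [0, 1] with values of exactly 0.5 treated as "undecided"; a real system would calibrate this mapping on the training pairs rather than use raw similarities.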

  2. PAN22 Authorship Analysis: Authorship Verification

    • zenodo.org
    Updated Nov 30, 2022
    Cite
    Efstathios Stamatatos; Krzysztof Kredens; Piotr Pezik; Annina Heini; Mike Kestemont; Janek Bevendorff; Martin Potthast; Benno Stein (2022). PAN22 Authorship Analysis: Authorship Verification [Dataset]. http://doi.org/10.5281/zenodo.6337137
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Efstathios Stamatatos; Krzysztof Kredens; Piotr Pezik; Annina Heini; Mike Kestemont; Janek Bevendorff; Martin Potthast; Benno Stein
    Description

    Task

    Authorship verification is the task of deciding whether two texts have been written by the same author, based on comparing the texts' writing styles. In previous editions of PAN, we explored the effectiveness of authorship verification technology in several languages and text genres. In the two most recent editions, cross-domain authorship verification using fanfiction texts was examined. Despite certain differences between fandoms, the task of cross-fandom authorship verification has proved to be relatively feasible. In the current edition, we focus on more challenging scenarios where each authorship verification case considers two texts that belong to different discourse types, or DTs (cross-DT authorship verification). This will allow us to study the ability of stylometric approaches to capture authorial characteristics that remain stable across DTs even when very different forms of expression are imposed by the DT norms.

    Based on a new corpus in English, we provide cross-DT authorship verification cases using the following DTs:

    • Essays
    • Emails
    • Text messages
    • Business memos

    The corpus comprises texts from around 100 individuals. All individuals are of similar age (18-22) and are native English speakers. The topic of the text samples is not restricted, while the level of formality can vary within a given DT (e.g., text messages may be addressed to family members or to non-familial acquaintances).

    More information at: Authorship Verification 2022

  3. PAN22 Authorship Analysis: Style Change Detection

    • zenodo.org
    • data.niaid.nih.gov
    Updated Dec 6, 2023
    Cite
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein (2023). PAN22 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.6334245
    Available download formats: zip
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein
    Description

    This is the dataset for the Style Change Detection task of PAN 2022.

    Task

    The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is among the most difficult and most interesting challenges in author identification: style change detection is the only means to detect plagiarism in a document if no comparison texts are given; likewise, style change detection can help to uncover gift authorship, to verify a claimed authorship, or to develop new technology for writing support.

    Previous editions of the Style Change Detection task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether a style change occurs between two consecutive paragraphs (2020, 2021), and locating the actual style changes (2021). Based on the progress made towards this goal in previous years, we again extend the set of challenges to entice novices and experts alike:

    Given a document, we ask participants to solve the following three tasks:

    • [Task1] Style Change Basic: for a text written by two authors that contains a single style change only, find the position of this change (i.e., cut the text into the two authors' texts at the paragraph level).
    • [Task2] Style Change Advanced: for a text written by two or more authors, find all positions of writing style change (i.e., assign all paragraphs of the text uniquely to one of the authors assumed for the multi-author document).
    • [Task3] Style Change Real-World: for a text written by two or more authors, find all positions of writing style change, where style changes now occur not only between paragraphs but at the sentence level.

    All documents are provided in English and may contain an arbitrary number of style changes, resulting from at most five different authors.

    Data

    To develop and then test your algorithms, three datasets including ground truth information are provided (dataset1 for task 1, dataset2 for task 2, and dataset3 for task 3).

    Each dataset is split into three parts:

    1. training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
    2. validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
    3. test set: Contains 15% of the whole dataset; no ground truth data is given. This set is used for evaluation (see below).

    You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.

    Input Format

    The datasets are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data for each dataset, respectively.

    For each problem instance X (i.e., each input document), two files are provided:

    1. problem-X.txt contains the actual text, where paragraphs are delimited by newlines for tasks 1 and 2. For task 3, we provide one sentence per paragraph (again, delimited by newlines).
    2. truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format. An example file is listed in the following (note that we list keys for the three tasks here):
      {
      "authors": NUMBER_OF_AUTHORS,
      "site": SOURCE_SITE,
      "changes": RESULT_ARRAY_TASK1 or RESULT_ARRAY_TASK3,
      "paragraph-authors": RESULT_ARRAY_TASK2
      }

      The result for task 1 (key "changes") is represented as an array holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). For task 2 (key "paragraph-authors"), the result is the order of authors contained in the document (e.g., [1, 2, 1] for a two-author document), where the first author appearing in the document is referred to as "1", the second author as "2", etc. Furthermore, we provide the total number of authors and the StackExchange site the texts were extracted from (i.e., the topic). The result for task 3 (key "changes") is structured like the results array for task 1; however, for task 3 the changes array holds a binary value for each pair of consecutive sentences, and there may be multiple style changes in the document.

      An example of a multi-author document with a style change between the third and fourth paragraph (or sentence for task 3) could be described as follows (we only list the relevant key/value pairs here):

      {
      "changes": [0,0,1,...],
      "paragraph-authors": [1,1,1,2,...]
      }
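
      As a small sketch of working with this format, the following Python snippet reads one ground-truth file and derives the paragraph-level change array from the author sequence; the file name truth-problem-1.json is a placeholder, and only the keys documented above are assumed:

      import json

      # Read one ground-truth file (placeholder name).
      with open("truth-problem-1.json") as f:
          truth = json.load(f)

      authors = truth["authors"]                     # total number of authors
      changes = truth["changes"]                     # tasks 1/3: one binary per consecutive pair
      para_authors = truth.get("paragraph-authors")  # task 2: author id per paragraph

      # A style change occurs wherever two consecutive paragraphs have
      # different author ids, so the task 2 labels imply the paragraph-level
      # task 1 labels.
      if para_authors:
          derived = [int(a != b) for a, b in zip(para_authors, para_authors[1:])]
          print(derived)  # e.g., [1, 1] for paragraph-authors [1, 2, 1]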

    Output Format

    To evaluate the solutions for the tasks, the results have to be stored in a single file for each of the input documents and each of the datasets. Please note that we require a solution file to be generated for each input problem for each dataset. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.

    For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object holding the solution to the respective task. The solution for tasks 1 and 3 is an array containing a binary value for each pair of consecutive paragraphs (task 1) or sentences (task 3). For task 2, the solution is an array containing the order of authors contained in the document (as in the truth files).

    An example solution file for tasks 1 and 3 is featured in the following (note again that for task 1, changes are captured on the paragraph level, whereas for task 3, changes are captured on the sentence level):

    {
    "changes": [0,0,1,0,0,...]
    }

    For task 2, the solution file looks as follows:

    {
    "paragraph-authors": [1,1,2,2,3,2,...]
    }
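
    The snippet below sketches how such solution files could be written in Python; the output directories, problem id, and prediction arrays are hypothetical placeholders, and only the file naming scheme and JSON keys described above are assumed:

    import json
    import os

    def write_solution(out_dir: str, problem_id: int, key: str, values: list) -> None:
        """Write solution-problem-X.json holding a 'changes' or 'paragraph-authors' array."""
        os.makedirs(out_dir, exist_ok=True)
        path = os.path.join(out_dir, f"solution-problem-{problem_id}.json")
        with open(path, "w") as f:
            json.dump({key: values}, f)

    # Tasks 1/3: one binary value per pair of consecutive paragraphs (or sentences).
    write_solution("solutions-dataset1", 1, "changes", [0, 0, 1, 0, 0])
    # Task 2: one author id per paragraph.
    write_solution("solutions-dataset2", 1, "paragraph-authors", [1, 1, 2, 2, 3, 2])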

  4. PAN25 Multi-Author Writing Style Analysis

    • zenodo.org
    Updated Feb 19, 2025
    Cite
    Eva Zangerle; Maximilian Mayerl; Martin Potthast; Benno Stein (2025). PAN25 Multi-Author Writing Style Analysis [Dataset]. http://doi.org/10.5281/zenodo.14891299
    Available download formats: zip
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Maximilian Mayerl; Martin Potthast; Benno Stein
    License

    Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the dataset for the shared task on Multi-Author Writing Style Analysis PAN@CLEF2025. Please consult the task's page for further details on the format, the dataset's creation, and links to baselines and utility code.

    Task

    The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is among the most difficult and most interesting challenges in author identification: style change detection is the only means to detect plagiarism in a document if no comparison texts are given; likewise, style change detection can help to uncover gift authorship, to verify a claimed authorship, or to develop new technology for writing support.

    Previous editions of the multi-author writing style analysis task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether a style change occurs between two consecutive paragraphs (2020, 2021, 2022), and locating the actual style changes (2021, 2022). In 2022, style changes also had to be detected at the sentence level. The previously used datasets exhibited high topic diversity, which allowed participants to leverage topic information as a style change signal; since the 2023 and 2024 editions of the writing style analysis task, special attention has been paid to this issue.

    We ask participants to solve the following intrinsic style change detection task: for a given text, find all positions of writing style change at the sentence level (i.e., for each pair of consecutive sentences, assess whether there was a style change). The simultaneous change of authorship and topic will be carefully controlled, and we will provide participants with datasets of three difficulty levels:

    1. Easy: The sentences of a document cover a variety of topics, allowing approaches to make use of topic information to detect authorship changes.
    2. Medium: The topical variety in a document is small (though still present) forcing the approaches to focus more on style to effectively solve the detection task.
    3. Hard: All sentences in a document are on the same topic.

    All documents are provided in English and may contain an arbitrary number of style changes. However, style changes may only occur between sentences (i.e., a single sentence is always authored by a single author and contains no style changes).
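
    To make the task concrete, here is a naive sentence-level baseline sketch in Python (an illustration, not an official PAN25 baseline): it flags a style change whenever the character n-gram similarity of two consecutive sentences falls below a threshold. The threshold and sample sentences are arbitrary assumptions; on the easy split such a detector mostly picks up topic shifts, and it should degrade on the hard split, which is exactly what the difficulty levels are designed to probe.

    # Naive consecutive-sentence change detector; NOT an official baseline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def predict_changes(sentences, threshold=0.2):
        """Return one binary per pair of consecutive sentences (1 = style change)."""
        vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
        X = vec.fit_transform(sentences)
        return [int(cosine_similarity(X[i], X[i + 1])[0, 0] < threshold)
                for i in range(len(sentences) - 1)]

    doc = ["The proof proceeds by induction on n.",
           "lol yeah that totally works, trust me.",
           "idk, maybe double check the base case?"]
    print(predict_changes(doc))  # illustrative output, e.g., [1, 0]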

    Data

    To develop and then test your algorithms, three datasets including ground truth information are provided (easy for the easy task, medium for the medium task, and hard for the hard task).

    Each dataset is split into three parts:

    1. training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
    2. validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
    3. test set: Contains 15% of the whole dataset; no ground truth data is given. This set is used for evaluation.

    You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.

