2 datasets found
  1. Data from: PAN20 Authorship Analysis: Authorship Verification

    • zenodo.org
    • explore.openaire.eu
    zip
    Updated Nov 13, 2023
    Cite
    Janek Bevendorff; Mike Kestemont; Efstathios Stamatatos; Enrique Manjavacas; Martin Potthast; Benno Stein (2023). PAN20 Authorship Analysis: Authorship Verification [Dataset]. http://doi.org/10.5281/zenodo.5106099
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 13, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Janek Bevendorff; Mike Kestemont; Efstathios Stamatatos; Enrique Manjavacas; Martin Potthast; Benno Stein
    Description

    Task

    Authorship verification is the task of deciding whether two texts have been written by the same author based on comparing the texts' writing styles.

    Over the three years from PAN 2020 to PAN 2022, we are developing a new experimental setup that addresses three key questions in authorship verification that have not yet been studied at scale:

    • Year 1 (PAN 2020): Closed-set verification.
      Given a large training dataset comprising known authors who have written about a given set of topics, the test dataset contains verification cases from a subset of the authors and topics found in the training data.

    • Year 2 (PAN 2021): Open-set verification.
      Given the training dataset of Year 1, the test dataset contains verification cases from previously unseen authors and topics.

    • Year 3 (PAN 2022): Surprise task.
      The task of the final year of this evaluation cycle (to be announced at a later time) will be designed with an eye toward realism and practical application.

    This evaluation cycle on authorship verification provides a renewed challenge of increasing difficulty within a large-scale evaluation. We invite you to plan ahead and participate in all three tasks.
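
    As an illustration of the verification setting only (this is not the official PAN baseline), a same-author decision can be sketched as a similarity comparison of character n-gram profiles. The features and the decision threshold below are assumptions made for demonstration and are not part of the task definition.

    # Illustrative sketch of a style-based verification decision; the character
    # n-gram TF-IDF features and the 0.5 threshold are arbitrary assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def same_author_score(text_a: str, text_b: str) -> float:
        # Compare the two texts' character 2-4-gram profiles.
        vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
        vectors = vectorizer.fit_transform([text_a, text_b])
        return float(cosine_similarity(vectors[0:1], vectors[1:2])[0, 0])

    def same_author(text_a: str, text_b: str, threshold: float = 0.5) -> bool:
        # Decide "same author" if the style similarity exceeds the threshold.
        return same_author_score(text_a, text_b) >= threshold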

    More information at: PAN @ CLEF 2020 - Authorship Verification

    Citing the Dataset

    If you use this dataset for your research, please be sure to cite the following paper:

    Sebastian Bischoff, Niklas Deckers, Marcel Schliebs, Ben Thies, Matthias Hagen, Efstathios Stamatatos, Benno Stein, and Martin Potthast. The Importance of Suppressing Domain Style in Authorship Analysis. CoRR, abs/2005.14714, May 2020.

    Bibtex:

    @Article{stein:2020k,
     author =       {Sebastian Bischoff and Niklas Deckers and Marcel Schliebs and Ben Thies and Matthias Hagen and Efstathios Stamatatos and Benno Stein and Martin Potthast},
     journal =       {CoRR},
     month =        may,
     title =        {{The Importance of Suppressing Domain Style in Authorship Analysis}},
     url =         {https://arxiv.org/abs/2005.14714},
     volume =       {abs/2005.14714},
     year =        2020
    }

  2. PAN20 Authorship Analysis: Style Change Detection

    • zenodo.org
    Updated Aug 10, 2021
    + more versions
    Cite
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Günther Specht; Benno Stein (2021). PAN20 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.5174953
    Explore at:
    Dataset updated
    Aug 10, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Günther Specht; Benno Stein
    Description

    This is the data set for the Style Change Detection task of PAN 2020.

    The goal of the style change detection task is to identify the text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial step in authorship identification and in multi-author document analysis in general. Note that, for this task, we assume that a change in writing style always signifies a change in author.

    Tasks

    Given a document, we ask participants to answer the following two questions:

    • Was the given document written by multiple authors? (task 1)
    • For each pair of consecutive paragraphs in the given document: is there a style change between these paragraphs? (task 2)

    In other words, the goal is to determine whether the given document contains style changes and, if it does, to find the positions of those changes in the document (between paragraphs).

    All documents are provided in English and may contain between zero and ten style changes, resulting from at most three different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style changes).
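
    The following sketch is purely illustrative and not an official baseline: it predicts a style change between two consecutive paragraphs whenever simple surface-level style features differ by more than a threshold, and derives the task 1 answer from the task 2 predictions. The features and the threshold are assumptions made for demonstration.

    # Illustrative sketch: threshold the distance between simple per-paragraph
    # style features; the features and the threshold are assumptions.
    import numpy as np

    def paragraph_features(paragraph: str) -> np.ndarray:
        words = paragraph.split() or [""]
        return np.array([
            sum(len(w) for w in words) / len(words),           # average word length
            paragraph.count(",") / len(words),                 # comma rate
            sum(w[:1].isupper() for w in words) / len(words),  # capitalisation rate
        ])

    def predict(paragraphs: list[str], threshold: float = 0.5) -> dict:
        features = [paragraph_features(p) for p in paragraphs]
        changes = [int(np.linalg.norm(a - b) > threshold)
                   for a, b in zip(features, features[1:])]
        # Task 1 follows from task 2: any change implies multiple authors.
        return {"multi-author": int(any(changes)), "changes": changes}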

    Data

    To develop and then test your algorithms, two data sets including ground truth information are provided. The data sets differ in their topical breadth (i.e., the number of different topics covered by the documents they contain): dataset-narrow contains texts from a relatively narrow set of subject areas (all related to technology), whereas dataset-wide adds further subject areas (travel, philosophy, economics, history, etc.).

    Both of those data sets are split into three parts:

    1. training set: Contains 50% of the whole data set and includes ground truth data. Use this set to develop and train your models.
    2. validation set: Contains 25% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
    3. test set: Contains 25% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation (see later).

    Input Format

    Both dataset-narrow and dataset-wide are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem.

    The structure of the provided datasets is as follows:

              
    train/
      dataset-narrow/
      dataset-wide/
    validation/
      dataset-narrow/
      dataset-wide/
    test/
      dataset-narrow/
      dataset-wide/
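
    Assuming the problem-X.txt file naming described next, enumerating the problem instances of one split and data set can be sketched as follows (paths are relative to the unpacked data set):

    # Illustrative sketch: list the problem IDs of one split/data set combination.
    from pathlib import Path

    def problem_ids(split: str = "train", dataset: str = "dataset-narrow") -> list[str]:
        root = Path(split) / dataset
        return sorted(path.stem.removeprefix("problem-")
                      for path in root.glob("problem-*.txt"))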
          

    For each problem instance X (i.e., each input document), two files are provided:

    1. problem-X.txt contains the actual text, where paragraphs are separated by newline characters (\n).
    2. truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:
      {
        "authors": NUMBER_OF_AUTHORS,
        "structure": ORDER_OF_AUTHORS,
        "site": SOURCE_SITE,
        "multi-author": RESULT_TASK1,
        "changes": RESULT_ARRAY_TASK2
      }

      The result for task 1 (key "multi-author") is a binary value: 1 if the document is multi-authored, 0 if it is single-authored. The result for task 2 (key "changes") is an array holding one binary value per pair of consecutive paragraphs in the document: 0 if there is no style change between the two paragraphs, 1 if there is. If the document is single-authored, the solution to task 2 is an array filled with 0s. Furthermore, we provide the order of authors contained in the document (e.g., [A1, A2, A1] for a two-author document), the total number of authors, and the StackExchange site the texts were extracted from (i.e., the topic). A minimal loading sketch is given after the examples below.

      An example of a multi-author document with a style change between the third and the fourth paragraph could look as follows (only the two relevant key/value pairs are listed):

      {
        "multi-author": 1,
        "changes": [0,0,1,...]
      }

      A single-author document would have the following form (again, only listing the two relevant key/value pairs):

      {
        "multi-author": 0,
        "changes": [0,0,0,...]
      }
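
    A minimal sketch of reading one problem instance and its ground truth, under the assumptions stated above (newline-separated paragraphs, the file naming described earlier), could look like this:

    # Illustrative sketch: load a problem's text and ground truth and split the
    # text into paragraphs (assumes newline-separated paragraphs, as described above).
    import json
    from pathlib import Path

    def load_problem(directory: str, problem_id: str):
        base = Path(directory)
        text = (base / f"problem-{problem_id}.txt").read_text(encoding="utf-8")
        truth = json.loads((base / f"truth-problem-{problem_id}.json").read_text(encoding="utf-8"))
        paragraphs = [p for p in text.split("\n") if p.strip()]
        # One "changes" entry is expected per pair of consecutive paragraphs.
        assert len(truth["changes"]) == len(paragraphs) - 1
        return paragraphs, truth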
