3 datasets found
  1. PAN19 Authorship Analysis: Style Change Detection

    • zenodo.org
    Cite: Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein (2021). PAN19 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.5174825
    Dataset updated: Aug 10, 2021
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein
    Description

    This is the data set for the Style Change Detection task of PAN@CLEF 2019.

    The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Detecting these positions is a crucial part of the authorship identification process, and for multi-author document analysis in general. Note that, for this task, we make the assumption that a change in writing style always signifies a change in author.

    Tasks

    Given a document, we ask participants to answer the following two questions:

    • Was the given document written by multiple authors? (task 1)
    • For each pair of consecutive paragraphs in the given document: is there a style change between these paragraphs? (task 2)

    In other words, the goal is to determine whether the given document contains style changes and, if it does, to find the positions of the changes in the document (between paragraphs).

    All documents are provided in English and may contain between zero and ten style changes, resulting from at most three different authors. However, style changes may only occur between paragraphs (i.e., a single paragraph is always authored by a single author and does not contain any style changes).

    Data

    To develop and then test your algorithms, two data sets including ground truth information are provided. Those data sets differ in their topical breadth (i.e., the number of different topics covered in the contained documents). dataset-narrow contains texts from a relatively narrow set of subject areas (all related to technology), whereas dataset-wide adds further subject areas (travel, philosophy, economics, history, etc.).

    Both of those data sets are split into three parts:

    • training set: Contains 50% of the whole data set and includes ground truth data. Use this set to develop and train your models.
    • validation set: Contains 25% of the whole data set and includes ground truth data. Use this set to evaluate and optimize your models.
    • test set: Contains 25% of the whole data set. For the documents in the test set, you are not given ground truth data. This set is used for evaluation.


    Input Format

    Both dataset-narrow and dataset-wide are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem.

    The structure of the provided datasets is as follows:

    train/
      dataset-narrow/
      dataset-wide/
    validation/
      dataset-narrow/
      dataset-wide/
    test/
      dataset-narrow/
      dataset-wide/
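
    As a minimal sketch, these splits could be enumerated as follows (the root directory name is an assumption; file names follow the problem-X.txt convention described below):

    from pathlib import Path

    # Root directory is an assumption; adjust to wherever the archive was extracted.
    root = Path("pan19-style-change-detection")

    # Walk the splits and datasets laid out above.
    for split in ("train", "validation", "test"):
        for dataset in ("dataset-narrow", "dataset-wide"):
            problems = sorted((root / split / dataset).glob("problem-*.txt"))
            print(f"{split}/{dataset}: {len(problems)} problems")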

    For each problem instance X (i.e., each input document), two files are provided:

    problem-X.txt contains the actual text, where paragraphs are denoted by \n.
    truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format:

    {
      "authors": NUMBER_OF_AUTHORS,
      "structure": ORDER_OF_AUTHORS,
      "site": SOURCE_SITE,
      "multi-author": RESULT_TASK1,
      "changes": RESULT_ARRAY_TASK2
    }

    The result for task 1 (key "multi-author") is a binary value (1 if the document is multi-authored, 0 if it is single-authored). The result for task 2 (key "changes") is represented as an array holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). If the document is single-authored, the solution to task 2 is an array filled with 0s. Furthermore, we provide the order of authors contained in the document (e.g., [A1, A2, A1] for a two-author document), the total number of authors, and the StackExchange site the texts were extracted from (i.e., the topic).

    An example of a multi-author document with a style change between the third and fourth paragraphs could look as follows (we only list the two relevant key/value pairs here):

    {
      "multi-author": 1,
      "changes": [0,0,1,...]
    }

    A single-author document would have the following form (again, only listing the two relevant key/value pairs):

    {
      "multi-author": 0,
      "changes": [0,0,0,...]
    }
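
    As a sketch of how these files might be consumed, the following loads one ground-truth file and checks the invariant stated above (the file path is illustrative):

    import json

    # Illustrative path to one ground-truth file.
    with open("truth-problem-1.json", encoding="utf-8") as f:
        truth = json.load(f)

    changes = truth["changes"]  # one binary value per consecutive paragraph pair

    # Invariant from the description: a document is multi-authored (task 1)
    # exactly when at least one style change (task 2) occurs.
    assert truth["multi-author"] == (1 if any(changes) else 0)

    print(f"{truth['authors']} author(s), site {truth['site']}, "
          f"changes after paragraphs: {[i + 1 for i, c in enumerate(changes) if c]}")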

  2. PAN19 Authorship Analysis: Style Change Detection

    • zenodo.org
    Cite: Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein (2021). PAN19 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.3530362
    Dataset updated: Aug 10, 2021
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Eva Zangerle; Michael Tschuggnall; Günther Specht; Martin Potthast; Benno Stein
    Description

    Many approaches have been proposed recently to identify the author of a given document. One fact is thereby often silently assumed: that the given document is indeed written by only one author. For a realistic author identification system, it is therefore crucial to first determine whether a document is single- or multi-authored.

    To this end, previous PAN editions aimed to analyze multi-authored documents. As it has been shown that it is hard to reliably identify individual authors and their contributions within a single document (Author Diarization, 2016; Style Breach Detection, 2017), last year's task substantially relaxed the problem by asking only for a binary decision (single- or multi-authored). Considering the promising results achieved by the submitted approaches (see the overview paper for details), we continue last year's task and additionally ask participants to predict the number of involved authors.

    Given a document, participants thus should apply intrinsic style analyses to hierarchically answer the following questions:

    1. Is the document written by one or more authors, i.e., do style changes exist or not?
    2. If it is multi-authored, how many authors have collaborated?

    All documents are provided in English and may contain anywhere from zero to arbitrarily many style changes, resulting from arbitrarily many authors.

    The training set contains 50% of the whole dataset and includes solutions. Use this set to develop and train your models.

    Like last year, the whole data set is based on user posts from various sites of the StackExchange network, covering different topics and containing approximately 300 to 2000 tokens per document.

    For each problem instance X, two files are provided:

    • problem-X.txt contains the actual text
    • problem-X.truth contains the ground truth, i.e., the correct solution in JSON format:
    { "authors": number_of_authors, "structure": [author_segment_1, ..., author_segment_3], "switches": [ character_pos_switch_segment_1, ..., character_pos_switch_segment_n, ] }

    An example of a multi-author document could look as follows:

    { "authors": 4, "structure": ["A1", "A2", "A4", "A2", "A4", "A2", "A3", "A2", "A4"], "switches": [805, 1552, 2827, 3584, 4340, 5489, 7564, 8714] }

    whereas a single-author document would have exactly the following form:

    { "authors": 1, "structure": ["A1"], "switches": [] }

    Note that authors within the structure correspond only to the respective document, i.e., they are not the same over the whole dataset. For example, author A1 in document 1 is most likely not the same author as A1 in document 2 (it could be, but as there are hundreds of authors the chances are very small that this is the case). Further, please consider that the structure and the switches are provided only as additional resources for the development of your algorithms, i.e., they are not expected to be predicted.
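
    A minimal sketch of using this metadata, e.g., cutting a document into its single-author segments at the given switch positions (file names are illustrative; this assumes the positions index into the raw text as read):

    import json

    # Illustrative file names for one problem instance.
    with open("problem-1.txt", encoding="utf-8") as f:
        text = f.read()
    with open("problem-1.truth", encoding="utf-8") as f:
        truth = json.load(f)

    # Cut the text at every switch position; add start and end as boundaries.
    bounds = [0] + truth["switches"] + [len(text)]
    segments = [text[a:b] for a, b in zip(bounds, bounds[1:])]

    # Each segment aligns with one entry of the "structure" array.
    for author, segment in zip(truth["structure"], segments):
        print(f"{author}: {segment[:40]!r}")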

    To tackle the problem, you can develop novel approaches, extend existing algorithms from last year's task, or adapt approaches from related problems such as intrinsic plagiarism detection or text segmentation. You are also free to additionally evaluate your approaches on last year's training/validation/test dataset (for the number of authors, use the corresponding metadata).

  3. PAN22 Authorship Analysis: Style Change Detection

    • zenodo.org
    Cite: Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein (2023). PAN22 Authorship Analysis: Style Change Detection [Dataset]. http://doi.org/10.5281/zenodo.6334245
    Available download formats: zip
    Dataset updated: Dec 6, 2023
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Eva Zangerle; Maximilian Mayerl; Michael Tschuggnall; Martin Potthast; Benno Stein
    Description

    This is the dataset for the Style Change Detection task of PAN 2022.

    Task

    The goal of the style change detection task is to identify text positions within a given multi-author document at which the author switches. Hence, a fundamental question is the following: if multiple authors have written a text together, can we find evidence for this fact, i.e., do we have a means to detect variations in the writing style? Answering this question is among the most difficult and most interesting challenges in author identification: style change detection is the only means of detecting plagiarism in a document if no comparison texts are given; likewise, style change detection can help to uncover gift authorships, to verify a claimed authorship, or to develop new technology for writing support.

    Previous editions of the Style Change Detection task aimed at, e.g., detecting whether a document is single- or multi-authored (2018), determining the actual number of authors within a document (2019), detecting whether there was a style change between two consecutive paragraphs (2020, 2021), and locating the actual style changes (2021). Based on the progress made towards this goal in previous years, we again extend the set of challenges to entice novices and experts alike:

    Given a document, we ask participants to solve the following three tasks:

    • [Task 1] Style Change Basic: for a text written by two authors that contains a single style change only, find the position of this change (i.e., cut the text into the two authors' texts on the paragraph level).
    • [Task 2] Style Change Advanced: for a text written by two or more authors, find all positions of writing style change (i.e., assign all paragraphs of the text uniquely to some author out of the number of authors assumed for the multi-author document).
    • [Task 3] Style Change Real-World: for a text written by two or more authors, find all positions of writing style change, where style changes now not only occur between paragraphs, but also at the sentence level.

    All documents are provided in English and may contain an arbitrary number of style changes, resulting from at most five different authors.

    Data

    To develop and then test your algorithms, three datasets including ground truth information are provided (dataset1 for task 1, dataset2 for task 2, and dataset3 for task 3).

    Each dataset is split into three parts:

    1. training set: Contains 70% of the whole dataset and includes ground truth data. Use this set to develop and train your models.
    2. validation set: Contains 15% of the whole dataset and includes ground truth data. Use this set to evaluate and optimize your models.
    3. test set: Contains 15% of the whole dataset, no ground truth data is given. This set is used for evaluation (see later).

    You are free to use additional external data for training your models. However, we ask you to make the additional data utilized freely available under a suitable license.

    Input Format

    The datasets are based on user posts from various sites of the StackExchange network, covering different topics. We refer to each input problem (i.e., the document for which to detect style changes) by an ID, which is subsequently also used to identify the submitted solution to this input problem. We provide one folder for train, validation, and test data for each dataset, respectively.

    For each problem instance X (i.e., each input document), two files are provided:

    1. problem-X.txt contains the actual text, where paragraphs are denoted by \n for tasks 1 and 2. For task 3, we provide one sentence per paragraph (again, split by \n).
    2. truth-problem-X.json contains the ground truth, i.e., the correct solution in JSON format. An example file is listed in the following (note that we list keys for the three tasks here):
      {
      "authors": NUMBER_OF_AUTHORS,
      "site": SOURCE_SITE,
      "changes": RESULT_ARRAY_TASK1 or RESULT_ARRAY_TASK3,
      "paragraph-authors": RESULT_ARRAY_TASK2
      }

      The result for task 1 (key "changes") is represented as an array holding a binary value for each pair of consecutive paragraphs within the document (0 if there was no style change, 1 if there was a style change). For task 2 (key "paragraph-authors"), the result is the order of authors contained in the document (e.g., [1, 2, 1] for a two-author document), where the first author appearing in the document is referred to as "1", the second as "2", etc. Furthermore, we provide the total number of authors and the StackExchange site the texts were extracted from (i.e., the topic). The result for task 3 (key "changes") is structured like the results array for task 1; however, for task 3, the changes array holds a binary value for each pair of consecutive sentences, and there may be multiple style changes in the document.

      An example of a multi-author document with a style change between the third and fourth paragraph (or sentence for task 3) could be described as follows (we only list the relevant key/value pairs here):

      {
      "changes": [0,0,1,...],
      "paragraph-authors": [1,1,1,2,...]
      }
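
      For tasks 1 and 2, the two arrays are linked: a style change occurs exactly where consecutive paragraph authors differ. A sketch of loading one instance and checking that relation (file names are illustrative; this assumes a truth file carrying both keys, as in the example above):

      import json

      # Illustrative file names for one problem instance.
      with open("problem-1.txt", encoding="utf-8") as f:
          # One paragraph per line, per the input format above.
          paragraphs = f.read().rstrip("\n").split("\n")
      with open("truth-problem-1.json", encoding="utf-8") as f:
          truth = json.load(f)

      authors = truth["paragraph-authors"]  # e.g. [1, 1, 1, 2, ...]
      changes = truth["changes"]            # e.g. [0, 0, 1, ...]

      # One author label per paragraph, one change flag per consecutive pair.
      assert len(authors) == len(paragraphs)
      assert len(changes) == len(paragraphs) - 1

      # A style change is exactly a differing author between consecutive paragraphs.
      assert changes == [int(a != b) for a, b in zip(authors, authors[1:])]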

    Output Format

    To evaluate the solutions for the tasks, the results have to be stored in a single file for each of the input documents and each of the datasets. Please note that we require a solution file to be generated for each input problem for each dataset. The data structure during the evaluation phase will be similar to that in the training phase, with the exception that the ground truth files are missing.

    For each given problem problem-X.txt, your software should output the missing solution file solution-problem-X.json, containing a JSON object holding the solution to the respective task. The solution for tasks 1 and 3 is an array containing a binary value for each pair of consecutive paragraphs (task 1) or sentences (task 3). For task 2, the solution is an array containing the order of authors contained in the document (as in the truth files).

    An example solution file for tasks 1 and 3 is featured in the following (note again that for task 1, changes are captured on the paragraph level, whereas for task 3, changes are captured on the sentence level):

    {
    "changes": [0,0,1,0,0,...]
    }

    For task 2, the solution file looks as follows:

    {
    "paragraph-authors": [1,1,2,2,3,2,...]
    }
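
    Putting it together, a minimal sketch of writing the required solution files for a batch of predictions (the output directory and the predictions themselves are illustrative):

    import json
    from pathlib import Path

    # Illustrative task 1 predictions: problem ID -> binary changes array.
    predictions = {"1": [0, 0, 1, 0, 0], "2": [0, 0, 0]}

    out_dir = Path("solutions/dataset1")  # illustrative output location
    out_dir.mkdir(parents=True, exist_ok=True)

    # One solution-problem-X.json per input problem, as required above.
    for problem_id, changes in predictions.items():
        path = out_dir / f"solution-problem-{problem_id}.json"
        path.write_text(json.dumps({"changes": changes}), encoding="utf-8")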
