Longformer: The Long-Document Transformer

A Transformer-based model that scales to long documents and makes it easy to run a wide range of document-level NLP tasks without chunking or shortening the long input, and without a complex architecture to combine information across those chunks.

Abstract

Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA.
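As a concrete illustration of the attention pattern described above, here is a minimal sketch of how it is typically exposed in the Hugging Face `transformers` library (assuming the public `allenai/longformer-base-4096` checkpoint): sliding-window local attention is applied everywhere by default, and global attention is requested per token through a `global_attention_mask`.

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

# Load the pretrained Longformer (supports sequences up to 4096 tokens).
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

text = "Long document text ... " * 200  # stand-in for a document of thousands of tokens
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)

# 0 = local (sliding-window) attention, 1 = global attention.
# Here only the first token attends globally, a common choice for
# classification-style tasks; QA-style tasks put global attention on the question tokens.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```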

Main links

How to get a Longformer model?

RoBERTa --> Longformer: build a “long” version of pretrained models (convert_model_to_long.ipynb): this notebook replicates the procedure described in the Longformer paper to train a Longformer model starting from a RoBERTa checkpoint (note: the same procedure can also be applied to build a “long” version of other pretrained models). A sketch of one key step follows below.
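The sketch below shows only one step of that procedure, extending RoBERTa's learned position embeddings to a longer maximum length by tiling the original 512 positions; the full notebook additionally replaces each self-attention layer with Longformer's sliding-window attention. Names follow the Hugging Face `transformers` library; treat this as an assumption-laden illustration, not the notebook itself.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

max_pos = 4096 + 2  # RoBERTa reserves two extra positions for padding/offset
model = RobertaModel.from_pretrained("roberta-base")
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", model_max_length=4096)

old_embed = model.embeddings.position_embeddings.weight.detach()  # (514, hidden)
new_embed = old_embed.new_empty(max_pos, old_embed.size(1))

# Keep the two special positions, then repeatedly copy the 512 learned positions.
new_embed[:2] = old_embed[:2]
k = 2
while k < max_pos:
    span = min(512, max_pos - k)
    new_embed[k:k + span] = old_embed[2:2 + span]
    k += span

# Swap in the extended embedding table and update the config accordingly.
model.embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(
    new_embed, freeze=False, padding_idx=model.config.pad_token_id
)
model.config.max_position_embeddings = max_pos
```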

NLP tasks with Longformer
