
LayoutLM

This model was released on 2019-12-31 and added to Hugging Face Transformers on 2020-11-16.

PyTorch

LayoutLM jointly learns text and the document layout rather than focusing only on text. It incorporates positional layout information and visual features of words from the document images.

You can find all the original LayoutLM checkpoints under the LayoutLM collection.

The example below demonstrates question answering with the LayoutLMForQuestionAnswering class.

import torch
from datasets import load_dataset
from transformers import AutoTokenizer, LayoutLMForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-qa", add_prefix_space=True)
model = LayoutLMForQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", dtype=torch.float16)

# FUNSD ships pre-extracted words and bounding boxes, so no OCR step is needed here.
dataset = load_dataset("nielsr/funsd", split="train")
example = dataset[0]
question = "what's his name?"
words = example["words"]
boxes = example["bboxes"]

encoding = tokenizer(
    question.split(),
    words,
    is_split_into_words=True,
    return_token_type_ids=True,
    return_tensors="pt",
)

# Build one bounding box per token: the word's box for document tokens,
# [1000] * 4 for [SEP], and [0] * 4 for everything else ([CLS] and question tokens).
bbox = []
for i, s, w in zip(encoding.input_ids[0], encoding.sequence_ids(0), encoding.word_ids(0)):
    if s == 1:
        bbox.append(boxes[w])
    elif i == tokenizer.sep_token_id:
        bbox.append([1000] * 4)
    else:
        bbox.append([0] * 4)
encoding["bbox"] = torch.tensor([bbox])

word_ids = encoding.word_ids(0)
outputs = model(**encoding)
loss = outputs.loss  # None here, since no start_positions/end_positions labels are passed
start_scores = outputs.start_logits
end_scores = outputs.end_logits

# Map the highest-scoring start and end logits back to word indices and print the answer span.
start, end = word_ids[start_scores.argmax(-1)], word_ids[end_scores.argmax(-1)]
print(" ".join(words[start : end + 1]))
  • The original LayoutLM was not designed with a unified processing workflow. Instead, it expects preprocessed text (words) and bounding boxes (boxes) from an external OCR engine (like Pytesseract), which are then passed to the tokenizer as additional inputs, as sketched below.
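
A minimal sketch of that OCR step, assuming pytesseract and Pillow are installed; "document.png" is a placeholder path:

import pytesseract
from PIL import Image

image = Image.open("document.png").convert("RGB")
ocr = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)

# Collect non-empty OCR tokens and their pixel-level (x0, y0, x1, y1) boxes.
words, boxes = [], []
for text, left, top, w, h in zip(ocr["text"], ocr["left"], ocr["top"], ocr["width"], ocr["height"]):
    if text.strip():
        words.append(text)
        boxes.append([left, top, left + w, top + h])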

  • The forward method expects the input bbox (bounding boxes of the input tokens). Each bounding box should be in the format (x0, y0, x1, y1), where (x0, y0) corresponds to the upper left corner of the bounding box and (x1, y1) corresponds to the lower right corner. The bounding boxes need to be normalized to a 0-1000 scale as shown below.

def normalize_bbox(bbox, width, height):
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
  • width and height correspond to the width and height of the original document in which the token occurs. These values can be obtained as shown below.
from PIL import Image
# Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")
width, height = image.size
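
Putting the pieces together, every pixel-level OCR box can be normalized before building the model inputs. A short sketch reusing normalize_bbox and the words and boxes gathered above:

boxes = [normalize_bbox(box, width, height) for box in boxes]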

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

[[autodoc]] LayoutLMConfig

[[autodoc]] LayoutLMTokenizer - __call__

[[autodoc]] LayoutLMTokenizerFast - __call__

[[autodoc]] LayoutLMModel

[[autodoc]] LayoutLMForMaskedLM

[[autodoc]] LayoutLMForSequenceClassification

[[autodoc]] LayoutLMForTokenClassification

[[autodoc]] LayoutLMForQuestionAnswering