# LayoutLM
This model was released on 2019-12-31 and added to Hugging Face Transformers on 2020-11-16.
LayoutLM jointly learns text and the document layout rather than focusing only on text. It incorporates positional layout information and visual features of words from the document images.
You can find all the original LayoutLM checkpoints under the LayoutLM collection.
The example below demonstrates document question answering with the `LayoutLMForQuestionAnswering` class.
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, LayoutLMForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("impira/layoutlm-document-qa", add_prefix_space=True)
model = LayoutLMForQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", dtype=torch.float16)

dataset = load_dataset("nielsr/funsd", split="train")
example = dataset[0]
question = "what's his name?"
words = example["words"]
boxes = example["bboxes"]

encoding = tokenizer(
    question.split(),
    words,
    is_split_into_words=True,
    return_token_type_ids=True,
    return_tensors="pt",
)

# Build one bounding box per token: real boxes for document words,
# [1000]*4 for [SEP], and [0]*4 for everything else (question tokens, [CLS]).
bbox = []
for i, s, w in zip(encoding.input_ids[0], encoding.sequence_ids(0), encoding.word_ids(0)):
    if s == 1:
        bbox.append(boxes[w])
    elif i == tokenizer.sep_token_id:
        bbox.append([1000] * 4)
    else:
        bbox.append([0] * 4)
encoding["bbox"] = torch.tensor([bbox])

word_ids = encoding.word_ids(0)
outputs = model(**encoding)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
start, end = word_ids[start_scores.argmax(-1)], word_ids[end_scores.argmax(-1)]
print(" ".join(words[start : end + 1]))
```
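As an alternative to the manual preprocessing above, the same checkpoint can typically be driven through the `document-question-answering` pipeline, which runs OCR internally (Pytesseract must be installed). A minimal sketch, where the image path is a placeholder:

```python
from transformers import pipeline

# Document QA pipeline; word extraction is handled internally via pytesseract.
doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

# "document.png" is a placeholder path to a scanned document image.
result = doc_qa(image="document.png", question="what's his name?")
print(result)
```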
- The original LayoutLM does not provide a unified processing workflow. Instead, you must obtain preprocessed text (`words`) and bounding boxes (`boxes`) from an external OCR engine (like Pytesseract) and provide them as additional inputs to the tokenizer (a sketch of this OCR step follows the snippets below).
- The `forward` method expects the input `bbox` (bounding boxes of the input tokens). Each bounding box should be in the format `(x0, y0, x1, y1)`, where `(x0, y0)` corresponds to the upper-left corner of the bounding box and `(x1, y1)` corresponds to the lower-right corner. The bounding boxes need to be normalized on a 0-1000 scale, as shown below.
```python
def normalize_bbox(bbox, width, height):
    return [
        int(1000 * (bbox[0] / width)),
        int(1000 * (bbox[1] / height)),
        int(1000 * (bbox[2] / width)),
        int(1000 * (bbox[3] / height)),
    ]
```

Here, `width` and `height` correspond to the width and height of the original document in which the token occurs. These values can be obtained as shown below.
```python
from PIL import Image

# The document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")

width, height = image.size
```
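Putting the notes above together, here is a minimal sketch of the OCR step, assuming Pytesseract is installed and reusing the `normalize_bbox` helper defined above; the image path is a placeholder:

```python
import pytesseract
from PIL import Image

# Placeholder path to a document image.
image = Image.open("document.png").convert("RGB")
width, height = image.size

# image_to_data returns per-word text plus pixel coordinates (left, top, width, height).
ocr = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)

words, boxes = [], []
for text, x, y, w, h in zip(ocr["text"], ocr["left"], ocr["top"], ocr["width"], ocr["height"]):
    if text.strip():  # drop empty OCR entries
        words.append(text)
        # Convert (x, y, w, h) in pixels to a normalized (x0, y0, x1, y1) box.
        boxes.append(normalize_bbox([x, y, x + w, y + h], width, height))
```

The resulting `words` and `boxes` can then be passed to the tokenizer exactly as in the question answering example above.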
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLM. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
- Read fine-tuning LayoutLM for document-understanding using Keras & Hugging Face Transformers to learn more.
- Read fine-tune LayoutLM for document-understanding using only Hugging Face Transformers for more information.
- Refer to this notebook for a practical example of how to fine-tune LayoutLM.
- Refer to this notebook for an example of how to fine-tune LayoutLM for sequence classification.
- Refer to this notebook for an example of how to fine-tune LayoutLM for token classification.
- Read Deploy LayoutLM with Hugging Face Inference Endpoints to learn how to deploy LayoutLM.
## LayoutLMConfig

[[autodoc]] LayoutLMConfig

## LayoutLMTokenizer

[[autodoc]] LayoutLMTokenizer
    - __call__

## LayoutLMTokenizerFast

[[autodoc]] LayoutLMTokenizerFast
    - __call__

## LayoutLMModel

[[autodoc]] LayoutLMModel

## LayoutLMForMaskedLM

[[autodoc]] LayoutLMForMaskedLM

## LayoutLMForSequenceClassification

[[autodoc]] LayoutLMForSequenceClassification

## LayoutLMForTokenClassification

[[autodoc]] LayoutLMForTokenClassification

## LayoutLMForQuestionAnswering

[[autodoc]] LayoutLMForQuestionAnswering