Swin Transformer V2
This model was released on 2021-11-18 and added to Hugging Face Transformers on 2022-07-27.
Swin Transformer V2 is a 3B parameter model that focuses on how to scale a vision model to billions of parameters. It introduces techniques like residual post-norm combined with scaled cosine attention for improved training stability, a log-spaced continuous position bias to better handle varying image resolutions between pre-training and fine-tuning, and a self-supervised pre-training method, SimMIM, to reduce the need for large amounts of labeled data. These improvements make it possible to efficiently train very large models (up to 3 billion parameters) capable of processing high-resolution images.
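The following is a minimal, illustrative PyTorch sketch (not the official implementation) of the two attention-level changes described above: scaled cosine attention and the log-spaced continuous position bias. All tensor shapes, the `tau` parameter, and the `mlp` module are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def scaled_cosine_attention(q, k, v, tau, rel_bias):
    """q, k, v: (batch, heads, seq, head_dim); tau: learnable per-head scale (assumed)."""
    # Cosine similarity replaces the dot product, which the paper reports
    # stabilizes training at large model sizes.
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    attn = (q @ k.transpose(-2, -1)) / tau.clamp(min=0.01) + rel_bias
    return attn.softmax(dim=-1) @ v

def log_spaced_relative_bias(window_size, mlp):
    """Log-spaced continuous position bias: a small MLP (assumed, mapping 2 -> num_heads)
    over log-spaced relative coordinates, so biases extrapolate across resolutions."""
    coords = torch.arange(-(window_size - 1), window_size, dtype=torch.float32)
    rel = torch.stack(torch.meshgrid(coords, coords, indexing="ij"), dim=-1)
    # Map coordinates to log space; the sign is kept so direction information survives.
    rel = torch.sign(rel) * torch.log2(1.0 + rel.abs()) / torch.log2(torch.tensor(8.0))
    # Gathering the per-(i, j) bias via a relative-position index is omitted here.
    return mlp(rel)  # (2*window_size-1, 2*window_size-1, num_heads)
```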
You can find official Swin Transformer V2 checkpoints under the Microsoft organization.
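To browse the available checkpoints programmatically, one option (assuming `huggingface_hub` is installed; the `author`/`search` filters are just one reasonable choice) is:

```python
from huggingface_hub import list_models

# List SwinV2 checkpoints published under the microsoft organization.
for model_info in list_models(author="microsoft", search="swinv2"):
    print(model_info.id)
```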
The example below demonstrates how to classify an image with the Pipeline API or the AutoModel class.

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="microsoft/swinv2-tiny-patch4-window8-256",
    dtype=torch.float16,
    device=0,
)
pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained(
    "microsoft/swinv2-tiny-patch4-window8-256",
)
model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swinv2-tiny-patch4-window8-256",
    device_map="auto",
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
predicted_class_label = model.config.id2label[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

Notes

- Swin Transformer V2 can pad the inputs for any input height and width divisible by 32.
- Swin Transformer V2 can be used as a backbone. When `output_hidden_states=True`, it outputs both `hidden_states` and `reshaped_hidden_states`. The `reshaped_hidden_states` have a shape of `(batch, num_channels, height, width)` rather than `(batch_size, sequence_length, num_channels)`; see the sketch after this list.
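As a concrete illustration of the backbone note above, this sketch runs `Swinv2Model` with `output_hidden_states=True` and prints the shape of each reshaped feature map. The random tensor stands in for a processed image and is an assumption of the sketch.

```python
import torch
from transformers import Swinv2Model

model = Swinv2Model.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

# Any height/width divisible by 32 works; 256x256 matches the checkpoint's training size.
pixel_values = torch.randn(1, 3, 256, 256)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values, output_hidden_states=True)

for stage, feature_map in enumerate(outputs.reshaped_hidden_states):
    # Each entry has shape (batch, num_channels, height, width),
    # ready to feed a dense-prediction head.
    print(stage, tuple(feature_map.shape))
```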
Swinv2Config
[[autodoc]] Swinv2Config
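A minimal configuration-to-model example; the default `Swinv2Config` values are meant to resemble the tiny variant used above.

```python
from transformers import Swinv2Config, Swinv2Model

# Initialize a configuration (defaults resemble a tiny-style architecture)
configuration = Swinv2Config()

# Initialize a model with random weights from that configuration
model = Swinv2Model(configuration)

# Access the model configuration
configuration = model.config
```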
Swinv2Model
[[autodoc]] Swinv2Model
    - forward
Swinv2ForMaskedImageModeling
[[autodoc]] Swinv2ForMaskedImageModeling
    - forward
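A short usage sketch for masked image modeling. The random mask is illustrative, and loading this classification checkpoint into the masked-image-modeling head only demonstrates the interface (the decoder is newly initialized).

```python
import torch
from transformers import Swinv2ForMaskedImageModeling

model = Swinv2ForMaskedImageModeling.from_pretrained("microsoft/swinv2-tiny-patch4-window8-256")

pixel_values = torch.randn(1, 3, 256, 256)  # stand-in for a processed image

# Boolean mask of shape (batch_size, num_patches); True marks patches to mask out.
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(0, 2, (1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
loss, reconstruction = outputs.loss, outputs.reconstruction
```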
Swinv2ForImageClassification
[[autodoc]] transformers.Swinv2ForImageClassification
    - forward