
PaddleOCR
Comprehensive OCR toolkit with text recognition, document parsing, and information extraction
English | 简体中文 | 繁體中文 | 日本語 | 한국어 | Français | Русский | Español | العربية
PaddleOCR is an industry-leading, production-ready OCR and document AI engine, offering end-to-end solutions from text extraction to intelligent document understanding
> [!TIP]
> PaddleOCR now provides an MCP server that supports integration with Agent applications like Claude Desktop. For details, please refer to PaddleOCR MCP Server.
The PaddleOCR 3.0 Technical Report is now available. See details at: PaddleOCR 3.0 Technical Report
PaddleOCR converts documents and images into structured, AI-friendly data (like JSON and Markdown) with industry-leading accuracy—powering AI applications for everyone from indie developers and startups to large enterprises worldwide. With over 50,000 stars and deep integration into leading projects like MinerU, RAGFlow, and OmniParser, PaddleOCR has become the premier solution for developers building intelligent document applications in the AI era.
PP-OCRv5 — Universal Scene Text Recognition
Single model supports five text types (Simplified Chinese, Traditional Chinese, English, Japanese, and Pinyin) with 13% accuracy improvement. Solves multilingual mixed document recognition challenges.
PP-StructureV3 — Complex Document Parsing
Intelligently converts complex PDFs and document images into Markdown and JSON files that preserve original structure. Outperforms numerous commercial solutions in public benchmarks. Perfectly maintains document layout and hierarchical structure.
PP-ChatOCRv4 — Intelligent Information Extraction
Natively integrates ERNIE 4.5 to precisely extract key information from massive documents, with 15% accuracy improvement over previous generation. Makes documents "understand" your questions and provide accurate answers.
In addition to providing an outstanding model library, PaddleOCR 3.0 also offers user-friendly tools covering model training, inference, and service deployment, so developers can rapidly bring AI applications to production.
Special Note: PaddleOCR 3.x introduces several significant interface changes, so code written for PaddleOCR 2.x is likely incompatible with PaddleOCR 3.x. Please ensure that the documentation you are reading matches the version of PaddleOCR you are using. This document explains the reasons for the upgrade and the major changes from PaddleOCR 2.x to 3.x.
Significant Model Additions:
Deployment Capability Upgrades:
Benchmark Support:
Bug Fixes:
- Fixed the PP-StructureV3 configuration files, which were missing the `use_chart_parsing` setting compared to other pipelines.

Other Enhancements:
Bug Fixes:
- Added the `save_vector`, `save_visual_info_list`, `load_vector`, and `load_visual_info_list` methods to the `PP-ChatOCRv4` class.
- Added the `glossary` and `llm_request_interval` parameters to the `translate` method of the `PPDocTranslation` class.

Documentation Improvements:
Others:
- Use `puremagic` instead of `python-magic` to reduce installation issues.

Key Models and Pipelines:
New MCP server: Details
Documentation Optimization: Improved the descriptions in some user guides for a smoother reading experience.
The default model download source has been changed from BOS to HuggingFace. Users can also set the environment variable `PADDLE_PDX_MODEL_SOURCE` to `BOS` to switch the model download source back to Baidu Object Storage (BOS).
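For example, switching the download source back to BOS only requires setting that environment variable before running PaddleOCR (a minimal sketch; the variable name and value come from the note above):

```shell
# Switch the model download source back to Baidu Object Storage (BOS)
# for subsequent paddleocr runs in this shell session
export PADDLE_PDX_MODEL_SOURCE=BOS
```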
Added service invocation examples for six languages—C++, Java, Go, C#, Node.js, and PHP—for pipelines like PP-OCRv5, PP-StructureV3, and PP-ChatOCRv4.
Improved the layout partition sorting algorithm in the PP-StructureV3 pipeline, enhancing the sorting logic for complex vertical layouts to deliver better results.
Enhanced model selection logic: when a language is specified but a model version is not, the system will automatically select the latest model version supporting that language.
Set a default upper limit for MKL-DNN cache size to prevent unlimited growth, while also allowing users to configure cache capacity.
Updated default configurations for high-performance inference to support Paddle MKL-DNN acceleration and optimized the logic for automatic configuration selection for smarter choices.
Adjusted the logic for obtaining the default device to consider the actual support for computing devices by the installed Paddle framework, making program behavior more intuitive.
Added Android example for PP-OCRv5. Details.
Bug Fixes:
- Fixed an issue where `export_paddlex_config_to_yaml` would not function correctly in certain cases.
- Fixed an inconsistency between the actual behavior of `save_path` and its documentation description.
- Fixed an issue with `overlap_ratio` under extremely special circumstances in the PP-StructureV3 pipeline.

Documentation Improvements:
- Updated the description of the `enable_mkldnn` parameter in the documentation to accurately reflect the program's actual behavior.
- Improved the documentation for the `lang` and `ocr_version` parameters.

Others:
2025.06.05: PaddleOCR 3.0.1 Released, includes:
- The default value of `limit_side_len` in the configuration has been changed from 736 to 64.
- Added a new text line orientation classification model, `PP-LCNet_x1_0_textline_ori`, with an accuracy of 99.42%. The default text line orientation classifier for the OCR, PP-StructureV3, and PP-ChatOCRv4 pipelines has been updated to this model.
- Optimized the `PP-LCNet_x0_25_textline_ori` model, improving accuracy by 3.3 percentage points to a current accuracy of 98.85%.

🔥🔥 2025.05.20: Official Release of PaddleOCR v3.0, including:
PP-OCRv5: High-Accuracy Text Recognition Model for All Scenarios - Instant Text from Images/PDFs.
PP-StructureV3: General-Purpose Document Parsing – Unleash SOTA Images/PDFs Parsing for Real-World Scenarios!
PP-ChatOCRv4: Intelligent Document Understanding – Extract Key Information, not just text from Images/PDFs.
Install PaddlePaddle by referring to the Installation Guide, and then install the PaddleOCR toolkit.
```bash
# If you only want to use the basic text recognition feature (returns text position coordinates and content), including the PP-OCR series
python -m pip install paddleocr

# If you want to use all features such as document parsing, document understanding, document translation, key information extraction, etc.
# python -m pip install "paddleocr[all]"
```
Starting from version 3.2.0, in addition to the `all` dependency group demonstrated above, PaddleOCR also supports installing a subset of optional features by specifying other dependency groups. All dependency groups provided by PaddleOCR are as follows:
| Dependency Group Name | Corresponding Functionality |
|---|---|
| doc-parser | Document parsing: can be used to extract layout elements such as tables, formulas, stamps, and images from documents; includes models like PP-StructureV3 |
| ie | Information extraction: can be used to extract key information from documents, such as names, dates, addresses, and amounts; includes models like PP-ChatOCRv4 |
| trans | Document translation: can be used to translate documents from one language to another; includes models like PP-DocTranslation |
| all | Complete functionality |
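For instance, a deployment that only needs document parsing can install just that group (a sketch; the group names are taken from the table above, using pip's standard extras syntax):

```shell
# Install only the document-parsing extras (e.g. PP-StructureV3 support)
python -m pip install "paddleocr[doc-parser]"

# Multiple groups can be combined in one install
# python -m pip install "paddleocr[doc-parser,trans]"
```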
```bash
# Run PP-OCRv5 inference
paddleocr ocr -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png --use_doc_orientation_classify False --use_doc_unwarping False --use_textline_orientation False

# Run PP-StructureV3 inference
paddleocr pp_structurev3 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png --use_doc_orientation_classify False --use_doc_unwarping False

# Get the Qianfan API Key at first, and then run PP-ChatOCRv4 inference
paddleocr pp_chatocrv4_doc -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png -k 驾驶室准乘人数 --qianfan_api_key your_api_key --use_doc_orientation_classify False --use_doc_unwarping False

# Get more information about "paddleocr ocr"
paddleocr ocr --help
```
4.1 PP-OCRv5 Example
```python
from paddleocr import PaddleOCR

# Initialize PaddleOCR instance
ocr = PaddleOCR(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_textline_orientation=False)

# Run OCR inference on a sample image
result = ocr.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_002.png")

# Visualize the results and save the JSON results
for res in result:
    res.print()
    res.save_to_img("output")
    res.save_to_json("output")
```
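The saved JSON can then be post-processed with the standard library. A minimal sketch: the `rec_texts` and `rec_scores` keys are assumptions modeled on typical PP-OCR result output, and the sample dict stands in for a real saved file.

```python
# Filter recognized lines by confidence from a PP-OCR-style result dict.
# NOTE: the key names below are assumptions modeled on typical PP-OCR JSON output.
sample_result = {
    "rec_texts": ["Invoice No. 12345", "Total: $99.00", "smudged line"],
    "rec_scores": [0.98, 0.95, 0.42],
}

def high_confidence_texts(result: dict, threshold: float = 0.9) -> list:
    """Return text lines whose recognition score meets the threshold."""
    return [
        text
        for text, score in zip(result["rec_texts"], result["rec_scores"])
        if score >= threshold
    ]

print(high_confidence_texts(sample_result))  # the low-scoring third line is dropped
```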
```python
from paddleocr import PPStructureV3

pipeline = PPStructureV3(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
)

# For Image
output = pipeline.predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/pp_structure_v3_demo.png",
)

# Visualize the results and save the JSON results
for res in output:
    res.print()
    res.save_to_json(save_path="output")
    res.save_to_markdown(save_path="output")
```
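For multi-page inputs, saving Markdown typically produces one file per page. A small stdlib helper can stitch them into a single document; this is a sketch under the assumption that the per-page files sort correctly by filename.

```python
from pathlib import Path

def merge_markdown_pages(output_dir: str, merged_name: str = "merged.md") -> Path:
    """Concatenate per-page Markdown files (sorted by filename) into one document."""
    out = Path(output_dir)
    # Skip a previously merged file so the helper is safe to re-run
    pages = sorted(p for p in out.glob("*.md") if p.name != merged_name)
    merged = out / merged_name
    merged.write_text(
        "\n\n".join(p.read_text(encoding="utf-8") for p in pages),
        encoding="utf-8",
    )
    return merged
```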
```python
from paddleocr import PPChatOCRv4Doc

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

pipeline = PPChatOCRv4Doc(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False
)

visual_predict_res = pipeline.visual_predict(
    input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

mllm_predict_info = None
use_mllm = False
# If a multimodal large model is used, the local mllm service needs to be started.
# Refer to https://github.com/PaddlePaddle/PaddleX/blob/release/3.0/docs/pipeline_usage/tutorials/vlm_pipelines/doc_understanding.en.md
# for deployment, and update the mllm_chat_bot_config configuration accordingly.
if use_mllm:
    mllm_chat_bot_config = {
        "module_name": "chat_bot",
        "model_name": "PP-DocBee",
        "base_url": "http://127.0.0.1:8080/",  # your local mllm service url
        "api_type": "openai",
        "api_key": "api_key",  # your api_key
    }

    mllm_predict_res = pipeline.mllm_pred(
        input="https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/vehicle_certificate-1.png",
        key_list=["驾驶室准乘人数"],
        mllm_chat_bot_config=mllm_chat_bot_config,
    )
    mllm_predict_info = mllm_predict_res["mllm_res"]

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)
chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```
⭐ Star this repository to keep up with exciting updates and new releases, including powerful OCR and document parsing capabilities! ⭐
PaddlePaddle WeChat official account | Join the tech discussion group |
---|---|
PaddleOCR wouldn't be where it is today without its incredible community! 💗 A massive thank you to all our longtime partners, new collaborators, and everyone who's poured their passion into PaddleOCR — whether we've named you or not. Your support fuels our fire!
| Project Name | Description |
|---|---|
| RAGFlow | RAG engine based on deep document understanding. |
| MinerU | Multi-type document to Markdown conversion tool. |
| Umi-OCR | Free, open-source, batch offline OCR software. |
| OmniParser | Screen parsing tool for pure vision-based GUI agents. |
| QAnything | Question and answer based on anything. |
| PDF-Extract-Kit | A powerful open-source toolkit designed to efficiently extract high-quality content from complex and diverse PDF documents. |
| Dango-Translator | Recognizes text on the screen, translates it, and shows the translation results in real time. |
| Learn more projects | More projects based on PaddleOCR |
This project is released under the Apache 2.0 license.
```bibtex
@misc{cui2025paddleocr30technicalreport,
  title={PaddleOCR 3.0 Technical Report},
  author={Cheng Cui and Ting Sun and Manhui Lin and Tingquan Gao and Yubo Zhang and Jiaxuan Liu and Xueqing Wang and Zelun Zhang and Changda Zhou and Hongen Liu and Yue Zhang and Wenyu Lv and Kui Huang and Yichao Zhang and Jing Zhang and Jun Zhang and Yi Liu and Dianhai Yu and Yanjun Ma},
  year={2025},
  eprint={2507.05595},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.05595},
}
```