API Reference#

Client API#

The PanoOCR class is the main entry point.

PanoOCR #

PanoOCR(engine: OCREngine, perspectives: Optional[Union[PerspectivePreset, List[PerspectiveMetadata]]] = None, dedup_options: Optional[DedupOptions] = None)

Pipeline-first API for panorama OCR.

This class provides a high-level interface for running OCR on equirectangular panorama images with automatic perspective projection and deduplication.

Example

from panoocr import PanoOCR
from panoocr.engines.macocr import MacOCREngine

engine = MacOCREngine()
pano = PanoOCR(engine)
result = pano.recognize("panorama.jpg")
result.save_json("results.json")

Attributes:

- engine: The OCR engine to use for text recognition.
- perspectives: List of perspective configurations.
- dedup_options: Deduplication options.

Initialize PanoOCR.

Parameters:

- engine (OCREngine, required): OCR engine implementing the OCREngine protocol.
- perspectives (Optional[Union[PerspectivePreset, List[PerspectiveMetadata]]], default None): Perspective configuration, either a preset name or a custom list of PerspectiveMetadata. Defaults to the DEFAULT preset.
- dedup_options (Optional[DedupOptions], default None): Deduplication options. Uses defaults if not provided.
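
A minimal construction sketch with explicit options. It assumes PerspectivePreset and DedupOptions are importable from the top-level panoocr package, as PanoOCR is in the quick-start example above:

from panoocr import PanoOCR, PerspectivePreset, DedupOptions  # top-level imports assumed
from panoocr.engines.macocr import MacOCREngine

engine = MacOCREngine()
pano = PanoOCR(
    engine,
    perspectives=PerspectivePreset.DEFAULT,                 # or a custom List[PerspectiveMetadata]
    dedup_options=DedupOptions(min_text_similarity=0.6),    # loosen or tighten duplicate merging
)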
Source code in src/panoocr/api/client.py
def __init__(
    self,
    engine: OCREngine,
    perspectives: Optional[Union[PerspectivePreset, List[PerspectiveMetadata]]] = None,
    dedup_options: Optional[DedupOptions] = None,
):
    """Initialize PanoOCR.

    Args:
        engine: OCR engine implementing the OCREngine protocol.
        perspectives: Perspective configuration - either a preset name or
            custom list of PerspectiveMetadata. Defaults to DEFAULT.
        dedup_options: Deduplication options. Uses defaults if not provided.
    """
    self.engine = engine

    # Set up perspectives
    if perspectives is None:
        self.perspectives = DEFAULT_IMAGE_PERSPECTIVES
        self._preset_name = PerspectivePreset.DEFAULT.value
    elif isinstance(perspectives, PerspectivePreset):
        self.perspectives = _get_perspectives_for_preset(perspectives)
        self._preset_name = perspectives.value
    else:
        self.perspectives = perspectives
        self._preset_name = "custom"

    # Set up deduplication
    self.dedup_options = dedup_options or DedupOptions()
    self._dedup_engine = SphereOCRDuplicationDetectionEngine(
        min_text_similarity=self.dedup_options.min_text_similarity,
        min_intersection_ratio_for_similar_text=self.dedup_options.min_intersection_ratio_for_similar_text,
        min_text_overlap=self.dedup_options.min_text_overlap,
        min_intersection_ratio_for_overlapping_text=self.dedup_options.min_intersection_ratio_for_overlapping_text,
        min_intersection_ratio=self.dedup_options.min_intersection_ratio,
    )

recognize #

recognize(image: Union[str, Image.Image], panorama_id: Optional[str] = None, show_progress: bool = True) -> OCRResult

Run OCR on a panorama image.

Parameters:

- image (Union[str, Image.Image], required): Path to panorama image or PIL Image.
- panorama_id (Optional[str], default None): Optional identifier for the panorama.
- show_progress (bool, default True): Whether to show a progress bar.

Returns:

OCRResult containing deduplicated sphere OCR results.

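A short usage sketch passing a PIL image instead of a path; the panorama_id shown is arbitrary:

from PIL import Image
from panoocr import PanoOCR
from panoocr.engines.macocr import MacOCREngine

pano = PanoOCR(MacOCREngine())
img = Image.open("panorama.jpg")
result = pano.recognize(img, panorama_id="lobby-01", show_progress=False)
print(len(result.results))  # number of deduplicated detections
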
Source code in src/panoocr/api/client.py
def recognize(
    self,
    image: Union[str, Image.Image],
    panorama_id: Optional[str] = None,
    show_progress: bool = True,
) -> OCRResult:
    """Run OCR on a panorama image.

    Args:
        image: Path to panorama image or PIL Image.
        panorama_id: Optional identifier for the panorama.
        show_progress: Whether to show a progress bar.

    Returns:
        OCRResult containing deduplicated sphere OCR results.
    """
    # Get image path for result metadata
    image_path = image if isinstance(image, str) else None
    if panorama_id is None:
        panorama_id = image_path or "panorama"

    # Load panorama
    pano = PanoramaImage(panorama_id=panorama_id, image=image)

    # Run OCR on each perspective
    all_sphere_results: List[List[SphereOCRResult]] = []

    perspective_iter = self.perspectives
    if show_progress:
        perspective_iter = tqdm(
            self.perspectives,
            desc="Processing perspectives",
            unit="perspective",
        )

    for perspective in perspective_iter:
        # Generate perspective view
        persp_image = pano.generate_perspective_image(perspective)

        # Run OCR
        flat_results = self.engine.recognize(persp_image.get_perspective_image())

        # Convert to sphere coordinates
        sphere_results = [
            result.to_sphere(
                horizontal_fov=perspective.horizontal_fov,
                vertical_fov=perspective.vertical_fov,
                yaw_offset=perspective.yaw_offset,
                pitch_offset=perspective.pitch_offset,
            )
            for result in flat_results
        ]

        all_sphere_results.append(sphere_results)

    # Deduplicate across adjacent perspectives
    deduplicated = self._deduplicate_results(all_sphere_results)

    return OCRResult(
        results=deduplicated,
        image_path=image_path,
        perspective_preset=self._preset_name,
    )

recognize_multi #

recognize_multi(image: Union[str, Image.Image], presets: Sequence[PerspectivePreset], panorama_id: Optional[str] = None, show_progress: bool = True) -> OCRResult

Run OCR on a panorama using multiple perspective presets.

Useful for multi-scale detection to catch both small and large text.

Parameters:

- image (Union[str, Image.Image], required): Path to panorama image or PIL Image.
- presets (Sequence[PerspectivePreset], required): List of perspective presets to use.
- panorama_id (Optional[str], default None): Optional identifier for the panorama.
- show_progress (bool, default True): Whether to show a progress bar.

Returns:

OCRResult containing deduplicated sphere OCR results.

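A sketch of multi-scale recognition that simply runs every available preset, so no preset names beyond DEFAULT are assumed (top-level import of PerspectivePreset is also an assumption):

from panoocr import PanoOCR, PerspectivePreset  # top-level import assumed
from panoocr.engines.macocr import MacOCREngine

pano = PanoOCR(MacOCREngine())
result = pano.recognize_multi("panorama.jpg", presets=list(PerspectivePreset))
result.save_json("multi_scale_results.json")
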
Source code in src/panoocr/api/client.py
def recognize_multi(
    self,
    image: Union[str, Image.Image],
    presets: Sequence[PerspectivePreset],
    panorama_id: Optional[str] = None,
    show_progress: bool = True,
) -> OCRResult:
    """Run OCR on a panorama using multiple perspective presets.

    Useful for multi-scale detection to catch both small and large text.

    Args:
        image: Path to panorama image or PIL Image.
        presets: List of perspective presets to use.
        panorama_id: Optional identifier for the panorama.
        show_progress: Whether to show a progress bar.

    Returns:
        OCRResult containing deduplicated sphere OCR results.
    """
    # Get image path for result metadata
    image_path = image if isinstance(image, str) else None
    if panorama_id is None:
        panorama_id = image_path or "panorama"

    # Combine perspectives from all presets
    combined_perspectives = combine_perspectives(
        *[_get_perspectives_for_preset(preset) for preset in presets]
    )

    # Temporarily swap perspectives
    original_perspectives = self.perspectives
    original_preset_name = self._preset_name

    self.perspectives = combined_perspectives
    self._preset_name = None

    try:
        result = self.recognize(
            image=image,
            panorama_id=panorama_id,
            show_progress=show_progress,
        )
        # Update result with multi-preset info
        return OCRResult(
            results=result.results,
            image_path=image_path,
            perspective_preset=None,
            perspective_presets=[preset.value for preset in presets],
        )
    finally:
        # Restore original perspectives
        self.perspectives = original_perspectives
        self._preset_name = original_preset_name

OCRResult dataclass #

OCRResult(results: Sequence[SphereOCRResult], image_path: Optional[str] = None, perspective_preset: Optional[str] = None, perspective_presets: Optional[Sequence[str]] = None)

OCR output plus metadata, with preview-tool-friendly JSON export.

Attributes:

- results (Sequence[SphereOCRResult]): List of deduplicated sphere OCR results.
- image_path (Optional[str]): Optional path to the source image.
- perspective_preset (Optional[str]): Name of the perspective preset used.
- perspective_presets (Optional[Sequence[str]]): List of perspective preset names if multiple were used.

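A small sketch of inspecting a result; it relies only on the to_dict methods documented below rather than on SphereOCRResult field names:

from panoocr import PanoOCR
from panoocr.engines.macocr import MacOCREngine

result = PanoOCR(MacOCREngine()).recognize("panorama.jpg")
for sphere_result in result.results:
    print(sphere_result.to_dict())  # one sphere-coordinate detection as a plain dict
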
to_dict #

to_dict() -> dict

Convert to a dictionary for JSON serialization.

Source code in src/panoocr/api/models.py
def to_dict(self) -> dict:
    """Convert to a dictionary for JSON serialization."""
    return {
        "image_path": self.image_path,
        "perspective_preset": self.perspective_preset,
        "perspective_presets": list(self.perspective_presets)
        if self.perspective_presets is not None
        else None,
        "results": [r.to_dict() for r in self.results],
    }

save_json #

save_json(path: str) -> None

Save OCR results in a JSON file suitable for the preview tool.

Parameters:

- path (str, required): Output file path.
Source code in src/panoocr/api/models.py
def save_json(self, path: str) -> None:
    """Save OCR results in a JSON file suitable for the preview tool.

    Args:
        path: Output file path.
    """
    with open(path, "w") as f:
        json.dump(self.to_dict(), f, indent=2)

from_dict classmethod #

from_dict(data: dict) -> 'OCRResult'

Create an OCRResult from a dictionary.

Parameters:

- data (dict, required): Dictionary with OCR result data.

Returns:

OCRResult instance.

Source code in src/panoocr/api/models.py
@classmethod
def from_dict(cls, data: dict) -> "OCRResult":
    """Create an OCRResult from a dictionary.

    Args:
        data: Dictionary with OCR result data.

    Returns:
        OCRResult instance.
    """
    results = [SphereOCRResult.from_dict(r) for r in data.get("results", [])]
    return cls(
        results=results,
        image_path=data.get("image_path"),
        perspective_preset=data.get("perspective_preset"),
        perspective_presets=data.get("perspective_presets"),
    )

load_json classmethod #

load_json(path: str) -> 'OCRResult'

Load OCR results from a JSON file.

Parameters:

- path (str, required): Input file path.

Returns:

OCRResult instance.

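A round-trip sketch using save_json and load_json; the file name is arbitrary, and the top-level OCRResult import is assumed:

from panoocr import PanoOCR, OCRResult  # top-level OCRResult import assumed
from panoocr.engines.macocr import MacOCREngine

result = PanoOCR(MacOCREngine()).recognize("panorama.jpg")
result.save_json("results.json")

restored = OCRResult.load_json("results.json")
assert restored.to_dict() == result.to_dict()  # lossless JSON round trip
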
Source code in src/panoocr/api/models.py
@classmethod
def load_json(cls, path: str) -> "OCRResult":
    """Load OCR results from a JSON file.

    Args:
        path: Input file path.

    Returns:
        OCRResult instance.
    """
    with open(path, "r") as f:
        data = json.load(f)
    return cls.from_dict(data)

PerspectivePreset #

Bases: str, Enum

Pre-defined perspective configurations for common text scales.
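
Because the enum subclasses str, preset values round-trip cleanly through JSON and config files. A sketch, assuming a top-level import:

from panoocr import PerspectivePreset  # import path assumed

preset_name = PerspectivePreset.DEFAULT.value   # plain string, safe to store in config/JSON
preset = PerspectivePreset(preset_name)         # reconstruct the enum member from its value
assert preset is PerspectivePreset.DEFAULT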

OCROptions dataclass #

OCROptions(config: dict | None = None)

Options passed to the underlying OCR engine.

Attributes:

- config (dict | None): Engine-specific configuration dictionary.

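A hedged construction sketch; the config key shown is purely illustrative, since the accepted keys are engine-specific:

from panoocr import OCROptions  # import path assumed

options = OCROptions(config={"languages": ["en"]})  # hypothetical engine-specific key
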
DedupOptions dataclass #

DedupOptions(min_text_similarity: float = 0.5, min_intersection_ratio_for_similar_text: float = 0.5, min_text_overlap: float = 0.5, min_intersection_ratio_for_overlapping_text: float = 0.15, min_intersection_ratio: float = 0.1)

Deduplication options applied after multi-view OCR.

Attributes:

- min_text_similarity (float): Minimum Levenshtein similarity for text comparison.
- min_intersection_ratio_for_similar_text (float): Minimum region overlap for similar texts.
- min_text_overlap (float): Minimum overlap similarity for text comparison.
- min_intersection_ratio_for_overlapping_text (float): Minimum region overlap for overlapping texts.
- min_intersection_ratio (float): Minimum region intersection ratio threshold.

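A sketch of tightening deduplication so that only near-identical, strongly overlapping detections are merged; the threshold values are illustrative, not recommendations:

from panoocr import PanoOCR, DedupOptions  # top-level imports assumed
from panoocr.engines.macocr import MacOCREngine

strict_dedup = DedupOptions(
    min_text_similarity=0.8,      # require closer Levenshtein matches before merging
    min_intersection_ratio=0.2,   # require larger region overlap before merging
)
pano = PanoOCR(MacOCREngine(), dedup_options=strict_dedup)
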
Module Structure#

panoocr/
├── api/              # Client API
│   ├── client.py     # PanoOCR
│   └── models.py     # OCRResult, options, OCREngine protocol
├── engines/          # OCR engines (lazily imported)
│   ├── macocr.py     # MacOCREngine (requires [macocr])
│   ├── easyocr.py    # EasyOCREngine (requires [easyocr])
│   ├── paddleocr.py  # PaddleOCREngine (requires [paddleocr])
│   ├── florence2.py  # Florence2OCREngine (requires [florence2])
│   └── trocr.py      # TrOCREngine (requires [trocr])
├── ocr/              # OCR result models
│   ├── models.py     # FlatOCRResult, SphereOCRResult
│   └── utils.py      # Visualization (requires [viz])
├── dedup/            # Deduplication
│   └── detection.py  # SphereOCRDuplicationDetectionEngine
├── image/            # Panorama handling
│   ├── models.py     # PanoramaImage, PerspectiveMetadata
│   └── perspectives.py  # Presets, generate_perspectives()
└── geometry.py       # Coordinate conversion utilities

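Engine modules are imported lazily, so an engine's optional dependency is only needed if that engine is actually used. A sketch of selecting a non-default backend behind its extra:

# Each engine lives in its own module and needs the matching extra,
# e.g. `pip install panoocr[easyocr]` for the EasyOCR backend.
from panoocr import PanoOCR
from panoocr.engines.easyocr import EasyOCREngine

pano = PanoOCR(EasyOCREngine())
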
Submodules#