Technology Hive

Data Extraction from Unstructured Sources

by Linda Torries – Tech Writer & Digital Trends Analyst
May 15, 2025
in Technology

Introduction to Vision-Enabled Language Models

In the past, extracting specific information from documents or images with traditional methods could quickly become cumbersome and frustrating, especially when the final results strayed far from what you intended. The reasons are diverse, ranging from overly complex document layouts to improperly formatted files to an avalanche of visual elements that machines struggle to interpret. Vision-enabled language models (vLMs), however, have come to the rescue. Over the past months and years, these models have gained ever-greater capabilities, from rough image descriptions to detailed text extraction.

The Power of Vision-Enabled Language Models

Notably, the extraction of complex textual information from images has seen astonishing progress. This allows for rapid knowledge extraction from diverse document types without brittle, rule-based systems that break as soon as the document structure changes — and without the time-, data-, and cost-intensive specialized training of custom models. However, there is one flaw: vLMs, like their text-only counterparts, tend to produce verbose output around the information you actually want. Phrases such as “Of course, here is the information you requested” or “This is the extracted information about XYZ” commonly surround the essential content.
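To make this flaw concrete, here is a toy illustration (the reply string is invented, not real model output): without structured output, the caller has to scrape the JSON payload out of the conversational wrapper, which is exactly the kind of brittle post-processing structured output avoids.

```python
import json
import re

# Hypothetical verbose reply from a vLM (invented for illustration).
reply = (
    'Of course, here is the information you requested:\n'
    '{"first_name": "Jane", "last_name": "Doe"}\n'
    'Let me know if you need anything else!'
)

# Brittle workaround: regex out the first {...} block and parse it.
match = re.search(r"\{.*\}", reply, re.DOTALL)
data = json.loads(match.group(0)) if match else {}
print(data["first_name"])  # Jane
```

This works for the happy path, but breaks as soon as the wrapper text itself contains braces or the model emits malformed JSON; schema-constrained output sidesteps the problem.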

Description of the Issue

The example in this article describes a situation that every job applicant has likely experienced many times. After you have carefully and thoroughly created your CV, thinking about every word and maybe even every letter, you upload the file to a job portal. But after successfully uploading the file, including all the requested information, you are asked once again to fill out the same details in standard HTML forms by copying and pasting the information from your CV into the correct fields. Some companies attempt to autofill these fields based on the information extracted from your CV, but the results are often far from accurate or complete.

Code Walkthrough

In the following code, we combine Pixtral, LangChain, and Pydantic to provide a simple solution. The code extracts the first name, last name, phone number, email, and birthday from the CV if they exist. This helps keep the example simple and focuses on the technical aspects. The code can be easily adapted for other use cases or extended to extract all required information from a CV.

Importing Required Libraries

In the first step, the required libraries are imported, including:

  • os, pathlib, and typing — standard library modules for environment access, path handling, and type annotations
  • base64 for encoding binary image data as text
  • dotenv to load environment variables from a .env file into os.environ
  • pydantic for defining a schema for the structured LLM output
  • ChatMistralAI from LangChain’s Mistral integration as the vision-enabled LLM interface
  • PIL for opening and resizing images
import os
import base64
from pathlib import Path
from typing import Optional
from dotenv import load_dotenv
from pydantic import BaseModel, Field
from langchain_mistralai.chat_models import ChatMistralAI
from langchain_core.messages import HumanMessage
from PIL import Image

Loading Environment Variables

Subsequently, the environment variables are loaded using load_dotenv(), and the MISTRAL_API_KEY is retrieved.

load_dotenv()
MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY")
if not MISTRAL_API_KEY:
    raise ValueError("MISTRAL_API_KEY not set in environment")
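For reference, the .env file read by load_dotenv() is a plain text file next to the script; a minimal example looks like this (placeholder value, not a real key):

```
MISTRAL_API_KEY=your-api-key-here
```

Keeping the key in .env rather than in the source avoids accidentally committing credentials.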

Defining the Output Schema with Pydantic

Following that, the output schema is defined. Pydantic is a Python library for data parsing and validation based on Python type hints. The next code block defines the structure of the expected output: all fields are optional, so anything the model cannot find simply stays None. These are the data points the model should extract from the CV image.

class BasicCV(BaseModel):
    first_name: Optional[str] = Field(None, description="first name")
    last_name: Optional[str] = Field(None, description="last name")
    phone: Optional[str] = Field(None, description="Telephone number")
    email: Optional[str] = Field(None, description="Email address")
    birthday: Optional[str] = Field(None, description="Date of birth (e.g., YYYY-MM-DD)")
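As a quick sanity check of the schema (a minimal sketch; attribute access shown here behaves the same on Pydantic v1 and v2), fields that are not supplied simply default to None:

```python
from typing import Optional
from pydantic import BaseModel, Field

class BasicCV(BaseModel):
    first_name: Optional[str] = Field(None, description="first name")
    last_name: Optional[str] = Field(None, description="last name")
    phone: Optional[str] = Field(None, description="Telephone number")
    email: Optional[str] = Field(None, description="Email address")
    birthday: Optional[str] = Field(None, description="Date of birth (e.g., YYYY-MM-DD)")

# Only two fields are supplied; the rest default to None.
cv = BasicCV(first_name="Jane", email="jane@example.com")
print(cv.first_name)  # Jane
print(cv.phone)       # None
```

This is the same forgiving behavior the extraction pipeline relies on when a CV lacks, say, a birthday.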

Converting Images to Base64

Next, the script's first helper function is defined. encode_image_to_base64() does exactly what its name suggests: it loads an image, optionally upscales it, and converts it into a base64 string, which is later passed to the vLM.

def encode_image_to_base64(image_path: Path, upscale_factor: float = 1.0) -> str:
    from io import BytesIO  # local import, used only in this helper

    with Image.open(image_path) as img:
        if upscale_factor != 1.0:
            # Optional upscaling can make small or low-resolution scans easier for the vLM to read.
            new_size = (int(img.width * upscale_factor), int(img.height * upscale_factor))
            img = img.resize(new_size, Image.LANCZOS)
        buffer = BytesIO()
        img.save(buffer, format="PNG")  # re-encode as PNG regardless of the input format
        return base64.b64encode(buffer.getvalue()).decode()
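The base64 step itself is easy to verify in isolation with stdlib-only bytes (the PNG magic header below is just a stand-in for real image data), including the data-URI prefix that is built in the next function:

```python
import base64

# Stand-in for the raw PNG bytes produced by img.save(buffer, format="PNG").
image_bytes = b"\x89PNG\r\n\x1a\n"  # PNG magic header only, for illustration

encoded = base64.b64encode(image_bytes).decode()
data_uri = f"data:image/png;base64,{encoded}"

# Decoding restores the original bytes exactly; base64 is lossless.
assert base64.b64decode(encoded) == image_bytes
print(data_uri[:22])  # data:image/png;base64,
```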

Processing the CV with a Vision Language Model

Now, let's move on to the main function of the script. process_cv() begins by initializing the Mistral interface with the previously generated API key. The model is then wrapped with .with_structured_output(BasicCV), passing in the Pydantic schema defined above so that LangChain coerces the model's reply into that structure.

def process_cv(
    image_path: Path,
    api_key: Optional[str] = None
) -> BasicCV:
    llm = ChatMistralAI(
        model="pixtral-12b-latest",
        mistral_api_key=api_key or MISTRAL_API_KEY,
    )
    structured_llm = llm.with_structured_output(BasicCV)
    image_b64 = encode_image_to_base64(image_path)
    data_uri = f"data:image/png;base64,{image_b64}"
    system_text = (
        "Extract only the following fields from this CV: first name, last name, "
        "telephone number, email address, and birthday. Return JSON matching the schema."
    )
    message = HumanMessage(
        content=[
            {"type": "text", "text": system_text},
            {"type": "image_url", "image_url": data_uri},
        ]
    )
    result: BasicCV = structured_llm.invoke([message])
    return result

Running the Script

This function is called from the main guard, where the image path is defined and the extracted information is printed.

if __name__ == "__main__":
    image_file = Path("cv-test.png")
    cv_data = process_cv(image_file)
    print(f"First Name: {cv_data.first_name}")
    print(f"Last Name: {cv_data.last_name}")
    print(f"Phone: {cv_data.phone}")
    print(f"Email: {cv_data.email}")
    print(f"Birthday: {cv_data.birthday}")

Conclusion

This simple Python script provides only a first impression of how powerful and flexible vLMs have become. In combination with Pydantic and with the support of the powerful LangChain framework, vLMs can be turned into a meaningful solution for many document processing workflows, such as application processing or invoice handling.

FAQs

  • Q: What are vision-enabled language models?
    A: Vision-enabled language models (vLMs) are AI models that can understand and process visual information from images and extract relevant text or data.
  • Q: How do vLMs improve document processing?
    A: vLMs improve document processing by allowing for rapid knowledge extraction from diverse document types without brittle, rule-based systems or specialized training of custom models.
  • Q: What is Pydantic and how is it used in this script?
    A: Pydantic is a Python library for data parsing and validation. In this script, it is used to define the output schema for the extracted data from the CV, ensuring that the output matches the expected structure.
  • Q: Can this script be adapted for other use cases?
    A: Yes, the script can be easily adapted for other use cases or extended to extract all required information from a CV or other documents.