
Resume Renamer 260120

From Game in the Brain Wiki
== 1. The Problem ==
Students and applicants rarely follow file naming conventions. You likely have a folder that looks like this:


<code>Resume.pdf</code>

<code>CV_Final_v2.docx</code>

<code>MyResume(1).pdf</code>

<code>john_doe.pdf</code>


This makes sorting by date or qualification impossible without opening every single file.


'''The Goal:''' Automatically rename these files based on their '''content''' to a standard format:
: <code>YYMMDD Name Degree/Background.pdf</code>
: ''Example:'' <code>250101 Juan Dela Cruz BS Information Technology.pdf</code>
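The target pattern is easy to produce with <code>strftime</code>; a minimal sketch (the helper name is illustrative, not part of the script):

```python
from datetime import datetime

def build_filename(name, background, date, ext=".pdf"):
    # "YYMMDD Name Degree/Background.pdf" -- two-digit year, zero-padded month/day
    return f"{date.strftime('%y%m%d')} {name} {background}{ext}"

print(build_filename("Juan Dela Cruz", "BS Information Technology",
                     datetime(2025, 1, 1)))
# -> 250101 Juan Dela Cruz BS Information Technology.pdf
```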


== 2. Requirements Checklist ==
Please ensure you have the following ready before starting.

[ ] '''Ubuntu 24.04''' System.

[ ] '''Python 3.12+''' (Pre-installed on Ubuntu 24.04).

[ ] '''Ollama''' installed locally (The AI engine).


[ ] '''A Small Language Model''' pulled (e.g., <code>granite3.3:2b</code> or <code>llama3.2</code>).
*: ''Note: Small models are fast but can make mistakes. The script has logic to catch these, but a human review is always recommended.''


[ ] '''Python Libraries:''' <code>pdfplumber</code> (for PDFs), <code>python-docx</code> (for Word), <code>requests</code> (to talk to Ollama).


[ ] '''No Images:''' The files must have '''embedded text'''. This script excludes OCR (Optical Character Recognition) to keep it fast and lightweight. Pure image scans will be skipped.
== 3. How the Script Works (The Logic) ==
This script acts as a "Project Manager" that hires two distinct specialists to process each file. It does not blindly ask the AI for everything, as small AIs make mistakes with math and dates.

'''File Discovery:'''


#* The script looks for <code>.pdf</code> and <code>.docx</code> files in the folder where the script is located.
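That discovery step amounts to a case-insensitive extension filter; a sketch (the function name is illustrative):

```python
import os

def discover_files(folder="."):
    # Keep only .pdf and .docx, case-insensitively, sorted for stable output
    wanted = {".pdf", ".docx"}
    return sorted(f for f in os.listdir(folder)
                  if os.path.splitext(f)[1].lower() in wanted)
```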


'''Text Extraction:'''

#* It pulls raw text. If the text is less than 50 characters (likely an image scan), it skips the file.

'''The Date Specialist (Python Regex):'''

#* '''Logic:''' It scans the text for '''explicit years''' (e.g., "2023", "2024").
#* '''Rule:''' It ignores the word "Present". Why? If a resume from 2022 says "2022 - Present", treating "Present" as "Today" (2026) would incorrectly date the old resume. We stick to the highest printed number.
#* '''Output:''' Sets the date to Jan 1st of the highest year found (e.g., <code>240101</code>).
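A simplified version of that heuristic (without the spaced-year handling the full script adds):

```python
import re

def latest_year_date(text):
    # Standalone years 2000-2059; "Present" is deliberately ignored
    years = [int(y) for y in re.findall(r'(?<!\d)(20[0-5][0-9])(?!\d)', text)]
    if not years:
        return None
    return f"{str(max(years))[2:]}0101"  # Jan 1st of the highest year, YYMMDD

print(latest_year_date("Intern, 2022 - Present. BS IT, 2019-2023."))
# -> 230101
```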


'''The Content Specialist (Ollama AI):'''

#* '''Logic:''' It sends the text to the local AI with strict instructions.
#* '''Rule 1 (Priority):''' It looks for a Degree (e.g., "BS IT") first. It is forbidden from using "Intern" or "Student" if a degree is found.
#* '''Rule 2 (Fallback):''' If the AI fails to find a name, the script grabs the first line of the document as a fallback.
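Under the hood this is a single POST to Ollama's local REST API; a minimal sketch using the same endpoint and options as the appendix script (assumes Ollama is running on its default port 11434 with the model pulled):

```python
import requests

def ask_ollama(prompt, model="granite3.3:2b"):
    # One POST to the local Ollama REST API; low temperature keeps
    # extraction deterministic, the timeout stops runaway hangs.
    data = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": 0.1, "num_ctx": 4096},
    }
    r = requests.post("http://localhost:11434/api/generate", json=data, timeout=60)
    r.raise_for_status()
    return r.json()["response"].strip()
```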
'''Sanitization & Renaming:'''


#* It fixes "Spaced Names" (e.g., <code>J O H N</code> -> <code>John</code>).
#* It ensures the filename isn't too long.
#* It renames the file only if the name doesn't already exist.
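The spaced-name fix and the filename cleanup can be sketched with the same regexes the appendix script uses:

```python
import re

def fix_spaced_names(text):
    # Collapse runs like "J O H N" by deleting spaces between single letters
    return re.sub(r'(?<=\b[A-Za-z])\s+(?=[A-Za-z]\b)', '', text)

def sanitize(s, max_len=60):
    s = fix_spaced_names(s)
    s = re.sub(r'[^\w\s-]', '', s)  # drop filename-unsafe characters
    return s.strip()[:max_len].title()

print(sanitize("J O H N Doe"))
# -> John Doe
```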


== 4. Installation Guide (Ubuntu 24.04) ==
Open your terminal (<code>Ctrl+Alt+T</code>) and follow these steps exactly.


=== Step A: System Update ===
Ensure your system tools are fresh to avoid installation conflicts.


<syntaxhighlight lang="bash">
sudo apt update && sudo apt upgrade -y
</syntaxhighlight>


=== Step B: Install Ollama & The Model ===
'''Install the Ollama Engine:'''


#:<syntaxhighlight lang="bash">curl -fsSL https://ollama.com/install.sh | sh</syntaxhighlight>


'''Download the Brain (The Model):'''


#:We use <code>granite3.3:2b</code> because it is very fast.
#:<syntaxhighlight lang="bash">ollama pull granite3.3:2b</syntaxhighlight>


=== Step C: Setup Python Environment ===
Ubuntu 24.04 requires Virtual Environments (<code>venv</code>) for Python scripts.


'''Create a Project Folder:'''


#:<syntaxhighlight lang="bash">
mkdir ~/resume_renamer
cd ~/resume_renamer
</syntaxhighlight>


'''Create the Virtual Environment:'''


#:<syntaxhighlight lang="bash">python3 -m venv venv</syntaxhighlight>


'''Activate the Environment:'''


#:<syntaxhighlight lang="bash">source venv/bin/activate</syntaxhighlight>
#:(You should see <code>(venv)</code> at the start of your command line now).


'''Install Required Libraries:'''


#:<syntaxhighlight lang="bash">pip install requests pdfplumber python-docx</syntaxhighlight>


=== Step D: Create the Script ===
Create the Python file:


#:<syntaxhighlight lang="bash">nano rename_resumes.py</syntaxhighlight>


'''Paste the Python code''' provided in the appendix below.


Save and exit: Press <code>Ctrl+O</code>, <code>Enter</code>, then <code>Ctrl+X</code>.


== 5. Running the Renamer ==
This script is '''portable'''. It works on the files sitting next to it.


'''Copy the Script:''' Move the <code>rename_resumes.py</code> file into your folder full of PDFs (e.g., <code>~/Documents/Student_CVs</code>).


'''Open Terminal in that folder:'''


#:<syntaxhighlight lang="bash">cd ~/Documents/Student_CVs</syntaxhighlight>


'''Activate your Python Environment (Point to where you created it):'''

#:<syntaxhighlight lang="bash">source ~/resume_renamer/venv/bin/activate</syntaxhighlight>


'''Run the script:'''


#:<syntaxhighlight lang="bash">python3 rename_resumes.py</syntaxhighlight>


== 6. Common Errors & Troubleshooting ==
{| class="wikitable"
! Error / Behavior !! Why it happens !! The Fix (Included in Script)
|-
| "Intern" instead of "Degree" || The resume had "INTERN" in big bold letters. || The script's prompt explicitly forbids "Intern" if a Degree is found.
|-
| Wrong Date (e.g., 260101) || The resume said "2021-Present" and the script assumed "Present" = 2026. || We disabled "Present" logic. It now only trusts explicit numbers (e.g., 2021).
|-
| Spaced Names (J O H N) || PDF formatting added spaces between letters. || A Regex function detects single letters + spaces and collapses them.
|-
| Script Freezes || Ollama is overwhelmed. || We added a 60-second timeout and a 0.5s pause between files.
|-
| Skipped Files || The PDF is a scanned image (no text). || This is intended. You need an OCR tool for these (not included here).
|}
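The freeze fix is just two lines around each AI call; the pattern looks like this (the helper name is illustrative):

```python
import time
import requests

def call_with_breather(url, payload):
    time.sleep(0.5)  # give Ollama a breather between files
    try:
        # 60s cap so one stuck file cannot freeze the whole run
        return requests.post(url, json=payload, timeout=60)
    except requests.exceptions.RequestException as e:
        print(f"[Warning] call failed: {e}")
        return None
```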


== Appendix: The Python Scripts ==

=== Rename Resumes Script ===
Copy the code below into <code>rename_resumes.py</code>. It includes the improved "Smart PDF Reader", which skips application-form and signature pages.

<syntaxhighlight lang="python">
import os
import requests
import json
import pdfplumber
import re
from datetime import datetime
import time

# --- OPTIONAL DEPENDENCY: python-docx ---
DOCX_AVAILABLE = False
try:
    from docx import Document
    DOCX_AVAILABLE = True
except ImportError:
    print("Warning: 'python-docx' not found. .docx files will be skipped.")
    print("To support Word docs, run: pip install python-docx")

# --- CONFIGURATION ---
FOLDER_PATH = os.path.dirname(os.path.abspath(__file__))
# You can change this to "llama3" or "mistral" if installed
OLLAMA_MODEL = "granite3.3:2b"
# ---------------------


# --- IMPROVED FUNCTION: SMART PDF READER (Skips Forms & Signature Pages) ---
def get_smart_pdf_text(filepath):
    """
    Reads PDF pages but SKIPS pages that look like 'Application Forms'.
    Returns the text of the first 2 'valid' resume pages found.
    """
    valid_text = ""
    pages_read = 0

    # Phrases that indicate a page is a FORM, not a Resume
    skip_phrases = [
        "APPLICATION FOR EMPLOYMENT",
        "OFFICIAL USE ONLY",
        "DO NOT WRITE BELOW THIS LINE",
        "PERSONAL DATA SHEET",
        "APPLICANT'S SIGNATURE",
        "FAMILY BACKGROUND"
    ]

    try:
        with pdfplumber.open(filepath) as pdf:
            for page in pdf.pages:
                text = page.extract_text() or ""

                # CHECK: Is this page just a form?
                # We check if ANY of the skip phrases appear in the text
                if any(phrase in text.upper() for phrase in skip_phrases):
                    print("   [INFO] Skipped a 'Form' page (found key phrase)...")
                    continue  # Skip this page, check the next one

                # If not a form, it's likely the resume. Keep it.
                valid_text += text + "\n"
                pages_read += 1

                # Stop after finding 2 valid pages of resume content
                if pages_read >= 2:
                    break
    except Exception as e:
        print(f"   [ERROR] PDF Read Error: {e}")
        return ""

    return valid_text
# --------------------------------------


def get_os_creation_date(filepath):
    """Last resort: Gets OS file creation date in YYMMDD format."""
    try:
        timestamp = os.path.getctime(filepath)
        return datetime.fromtimestamp(timestamp).strftime('%y%m%d')
    except OSError:
        return datetime.now().strftime('%y%m%d')


def extract_latest_year_heuristic(text):
    """
    Scans for years (2000-2059), including spaced years (2 0 2 4).
    Returns the HIGHEST year found.
    """
    current_year = datetime.now().year
    found_years = []

    # 1. Standard Years (e.g., "2024", "2023-2024")
    matches_standard = re.findall(r'(?<!\d)(20[0-5][0-9])(?!\d)', text)
    if matches_standard:
        found_years.extend([int(y) for y in matches_standard])

    # 2. Spaced Years (e.g., "2 0 2 4")
    matches_spaced = re.findall(r'(?<!\d)2\s+0\s+[0-5]\s+[0-9](?!\d)', text)
    if matches_spaced:
        for m in matches_spaced:
            clean_year = int(m.replace(" ", ""))
            found_years.append(clean_year)

    if found_years:
        valid_years = [y for y in found_years if y <= current_year + 5]
        if valid_years:
            latest_year = max(valid_years)
            short_year = str(latest_year)[2:]
            return f"{short_year}0101"

    return None


def extract_text_from_docx(filepath):
    """Reads text from .docx files, including tables."""
    if not DOCX_AVAILABLE:
        return ""
    try:
        doc = Document(filepath)
        full_text = []
        for para in doc.paragraphs:
            full_text.append(para.text)
        for table in doc.tables:
            for row in table.rows:
                for cell in row.cells:
                    full_text.append(cell.text)
        return "\n".join(full_text)
    except Exception as e:
        print(f"[ERROR] Reading DOCX: {e}")
        return ""


def clean_text_for_llm(text):
    clean = " ".join(text.split())
    # Limit to 4000 chars to prevent choking small models
    return clean[:4000]


def ask_ollama(text):
    system_instruction = (
        "You are a data extraction assistant. "
        "Extract the applicant's Full Name and Background."
        "\n\nBackground Extraction Rules (STRICT):\n"
        "1. MANDATORY: You MUST prefer the Educational Degree over any job title.\n"
        "  - Example: If text says 'IT Intern' AND 'Diploma in Information Technology', output 'Diploma in Information Technology'.\n"
        "  - Example: If text says 'Mechanical Engineering Student', output 'Diploma in Mechanical Engineering' (if listed) or 'Mechanical Engineering'.\n"
        "2. FORBIDDEN: Do NOT use 'Intern', 'Student', 'Assistant', or 'Worker' as the background unless NO degree is mentioned.\n"
        "\nOutput strictly in this format: Name | Background."
        "\nDo NOT include notes, explanations, or numbered lists."
    )

    prompt = f"Resume Text:\n{text}\n\n{system_instruction}"

    url = "http://localhost:11434/api/generate"
    data = {
        "model": OLLAMA_MODEL,
        "prompt": prompt,
        "stream": False,
        "options": {
            "temperature": 0.1,
            "num_ctx": 4096
        }
    }

    try:
        # Added timeout to prevent hanging on one file
        response = requests.post(url, json=data, timeout=60)
        response.raise_for_status()
        return response.json()['response'].strip()
    except Exception as e:
        print(f"    [Warning] Ollama call failed: {e}")
        return None


def fix_spaced_names(text):
    # Fixes "J O H N" -> "JOHN"
    return re.sub(r'(?<=\b[A-Za-z])\s+(?=[A-Za-z]\b)', '', text)


def clean_extracted_string(s):
    # Remove lists (1.), labels (Name:), and fix spacing
    s = re.sub(r'^(1\.|2\.|Name:|Background:|\d\W)', '', s, flags=re.IGNORECASE)
    s = fix_spaced_names(s)
    s = s.split('\n')[0]
    s = re.split(r'(?i)note\s*:', s)[0]

    # Truncate to safe filename length
    if len(s) > 60:
        s = s[:60].strip()

    return s.strip().title()


def get_name_fallback(text):
    """
    If AI returns 'Name' or 'Unknown', this function grabs the
    first non-empty line of the resume, which is usually the name.
    """
    lines = [line.strip() for line in text.split('\n') if line.strip()]

    ignore_list = ['resume', 'curriculum vitae', 'cv', 'profile', 'bio',
                   'page', 'summary', 'objective', 'name', 'contact']

    for line in lines:
        lower_line = line.lower()
        if len(line) < 3 or any(w in lower_line for w in ignore_list):
            continue

        if len(line.split()) > 5:  # Names rarely have >5 words
            continue
        if "looking for" in lower_line or "seeking" in lower_line:
            continue

        if len(line) < 50 and not re.search(r'[0-9!@#$%^&*()_+={};"<>?]', line):
            print(f"   [Fallback] AI failed. Guessed name from first line: {line}")
            return line

    return "Unknown Applicant"


def process_folder():
    print(f"--- Resume Renamer (Strict Degree Priority + Resilient) ---")
    print(f"Working in: {FOLDER_PATH}\n")

    count_success = 0
    count_fail = 0
    script_name = os.path.basename(__file__)

    for filename in os.listdir(FOLDER_PATH):
        # 1. Check Extension
        file_ext = os.path.splitext(filename)[1].lower()
        if filename == script_name:
            continue
        if file_ext == '.docx' and not DOCX_AVAILABLE:
            continue
        if file_ext not in ['.pdf', '.docx']:
            continue

        filepath = os.path.join(FOLDER_PATH, filename)
        text = ""

        # 2. Extract Text
        print(f"Processing: {filename}...")
        try:
            if file_ext == '.pdf':
                text = get_smart_pdf_text(filepath)
            elif file_ext == '.docx':
                text = extract_text_from_docx(filepath)

            if len(text) < 50:
                print("    [SKIP] Text too short.")
                count_fail += 1
                continue
        except Exception as e:
            print(f"    [ERROR] Reading file: {e}")
            count_fail += 1
            continue

        # 3. GET DATE
        date_str = extract_latest_year_heuristic(text)
        if not date_str:
            date_str = get_os_creation_date(filepath)
            print(f"   [Fallback] Using OS Date: {date_str}")

        # 4. GET NAME/BG
        # Add a tiny delay to give Ollama a breather between files
        time.sleep(0.5)
        llm_output = ask_ollama(clean_text_for_llm(text))

        name = None
        bg = "General"

        if llm_output:
            if "|" in llm_output:
                parts = llm_output.split('|', 1)
                name = parts[0].strip()
                bg = parts[1].strip()
            elif "\n" in llm_output:
                lines = [line.strip() for line in llm_output.split('\n') if line.strip()]
                if len(lines) >= 2:
                    name = lines[0]
                    bg = lines[1]

            # --- IMPROVED FALLBACK CHECK ---
            forbidden_names = ["name", "unknown", "resume", "applicant", "candidate", "full name"]
            if not name or name.strip().lower() in forbidden_names:
                name = get_name_fallback(text)
            # -------------------------------

            if name:
                name = clean_extracted_string(name)
                bg = clean_extracted_string(bg)

                safe_name = re.sub(r'[^\w\s-]', '', name)
                safe_bg = re.sub(r'[^\w\s-]', '', bg)

                new_filename = f"{date_str} {safe_name} {safe_bg}{file_ext}"
                new_filepath = os.path.join(FOLDER_PATH, new_filename)

                if filepath != new_filepath:
                    if not os.path.exists(new_filepath):
                        os.rename(filepath, new_filepath)
                        print(f"   -> Renamed: [{new_filename}]")
                        count_success += 1
                    else:
                        print(f"    -> Duplicate: [{new_filename}]")
                else:
                    print("   -> No change.")
            else:
                print(f"    -> AI Format Fail: {llm_output}")
                count_fail += 1
        else:
            print("    -> AI returned nothing.")
            count_fail += 1

    print(f"\nDone! Renamed: {count_success} | Failed: {count_fail}")


if __name__ == "__main__":
    process_folder()
</syntaxhighlight>

=== OCR Converter Script ===
Copy the code below into <code>ocr_converter.py</code>. The renamer cannot read image-only PDFs, so convert those first; the result is only as good as the underlying OCR engine. Run it with:
<syntaxhighlight lang="bash">python3 ocr_converter.py</syntaxhighlight>
<syntaxhighlight lang="python">
import os
import subprocess
import pdfplumber

# Configuration
FOLDER_PATH = "."  # Current folder
MIN_TEXT_LENGTH = 50  # If text is less than this, we assume it's an image


def has_embedded_text(file_path):
    """Checks if a PDF already has text."""
    try:
        with pdfplumber.open(file_path) as pdf:
            full_text = ""
            for page in pdf.pages:
                text = page.extract_text()
                if text:
                    full_text += text

            # If we found enough text, return True
            if len(full_text.strip()) > MIN_TEXT_LENGTH:
                return True
    except Exception as e:
        print(f"Error reading {file_path}: {e}")
        return False
    return False


def ocr_file(file_path):
    """Runs OCRmyPDF on the file."""
    output_path = file_path.replace(".pdf", "_OCR.pdf")

    # Don't re-OCR if the output already exists
    if os.path.exists(output_path):
        print(f"Skipping {file_path} (OCR version already exists)")
        return

    print(f"🖼️  Image Detected: Converting {file_path}...")

    try:
        # Run the OCR command
        # --force-ocr: Process even if it thinks there is some text (often garbage in scans)
        # --deskew: Straighten crooked scans
        command = [
            "ocrmypdf",
            "--force-ocr",
            "--deskew",
            file_path,
            output_path
        ]

        result = subprocess.run(command, capture_output=True, text=True)

        if result.returncode == 0:
            print(f"✅ Success: Created {output_path}")
        else:
            print(f"❌ Failed to OCR {file_path}")
            print(result.stderr)

    except FileNotFoundError:
        print("❌ Error: 'ocrmypdf' is not installed. Run 'sudo apt install ocrmypdf' first.")


def main():
    print("🔍 Scanning for image-based PDFs...")
    files = [f for f in os.listdir(FOLDER_PATH)
             if f.lower().endswith(".pdf") and "_OCR" not in f]

    count = 0
    for filename in files:
        file_path = os.path.join(FOLDER_PATH, filename)

        if not has_embedded_text(file_path):
            ocr_file(file_path)
            count += 1

    if count == 0:
        print("🎉 No image-only PDFs found. All files have text!")
    else:
        print(f"\n✨ Processed {count} files.")


if __name__ == "__main__":
    main()
</syntaxhighlight>

=== PDF 2 VCF Script ===
Copy the code below into <code>pdf2vcf.py</code>. It writes one bulk VCF file that you can import into your contacts. Run it with:
<syntaxhighlight lang="bash">python3 pdf2vcf.py</syntaxhighlight>
<syntaxhighlight lang="python">
import os
import requests
import json
import pdfplumber
import re
from datetime import datetime
import time

# --- CONFIGURATION ---
FOLDER_PATH = os.path.dirname(os.path.abspath(__file__))
OLLAMA_MODEL = "granite3.3:2b"
# ---------------------


def get_timestamp():
    """Returns current YYMMDD-HHMMSS"""
    return datetime.now().strftime('%y%m%d-%H%M%S')


def get_short_date():
    """Returns current YYMMDD"""
    return datetime.now().strftime('%y%m%d')


# --- SMART PDF READER ---
def get_smart_pdf_text(filepath):
    """
    Reads PDF pages but SKIPS pages that look like 'Application Forms'.
    Returns the text of the first 2 'valid' resume pages found.
    """
    valid_text = ""
    pages_read = 0
    skip_phrases = [
        "APPLICATION FOR EMPLOYMENT", "OFFICIAL USE ONLY",
        "DO NOT WRITE BELOW THIS LINE", "PERSONAL DATA SHEET",
        "APPLICANT'S SIGNATURE", "FAMILY BACKGROUND"
    ]

    try:
        with pdfplumber.open(filepath) as pdf:
            for page in pdf.pages:
                text = page.extract_text() or ""
                # CHECK: Is this page just a form?
                if any(phrase in text.upper() for phrase in skip_phrases):
                    continue

                valid_text += text + "\n"
                pages_read += 1
                if pages_read >= 2:
                    break
    except Exception as e:
        print(f"   [ERROR] PDF Read Error: {e}")
        return ""
    return valid_text


def clean_text_for_llm(text):
    clean = " ".join(text.split())
    return clean[:6000]


def parse_name_from_filename(filename):
    """
    Fallback: Tries to guess the name from a filename like '260101 Kim Ong Diploma.pdf'
    """
    # Remove extension
    base = os.path.splitext(filename)[0]

    # Regex: Look for 6 digits at start, then text
    match = re.search(r'^\d{6}\s+(.*?)\s+(?:Bachelor|Diploma|Certificate|General|Master|PhD|Associate|Engineer|Architect)',
                      base, re.IGNORECASE)
    if match:
        return match.group(1).strip()

    # Weaker Regex: Just take the first 3 words after the date
    match_weak = re.search(r'^\d{6}\s+([A-Za-z-]+\s+[A-Za-z-]+\s?[A-Za-z-]*)', base)
    if match_weak:
        return match_weak.group(1).strip()

    return None


def ask_ollama_extraction(text, filename):
    """
    Asks LLM to extract specific fields, using the FILENAME as a hint.
    """
    system_instruction = (
        "You are a Data Extraction Expert. Extract details from the resume.\n"
        f"CONTEXT: The file is named '{filename}'. This filename likely contains the correct spelling of the Name and Degree.\n"
        "\nRULES:\n"
        "1. **Double Check the Name:** If the resume text has OCR errors (e.g., 'K1m 0ng'), use the spelling from the Filename ('Kim Ong').\n"
        "2. **Extract:** Full Name, Educational Degree (Short), Email, Phone, and Summary.\n"
        "3. **Summary:** Write a concise 3-sentence summary of their key skills.\n"
        "\nRETURN JSON ONLY:\n"
        "{\n"
        '  "name": "John Doe",\n'
        '  "degree": "BS IT",\n'
        '  "email": "john@email.com",\n'
        '  "phone": "09123456789",\n'
        '  "summary": "Experienced in..."\n'
        "}"
    )

    prompt = f"Resume Text:\n{text}\n\n{system_instruction}"

    url = "http://localhost:11434/api/generate"
    data = {
        "model": OLLAMA_MODEL,
        "prompt": prompt,
        "stream": False,
        "format": "json",
        "options": {"temperature": 0.1, "num_ctx": 4096}
    }

    try:
        response = requests.post(url, json=data, timeout=60)
        response.raise_for_status()
        result_json = response.json()['response']
        return json.loads(result_json)
    except Exception as e:
        print(f"    [Warning] AI Extraction failed: {e}")
        return None


def create_vcard_string(data, creation_date):
    """
    Formats the data into VCF 3.0 format.
    Format: Name Degree YYMMDD (All in First Name field for easy searching)
    """
    name = data.get("name", "Unknown")
    degree = data.get("degree", "")
    email = data.get("email", "")
    phone = data.get("phone", "")
    summary = data.get("summary", "")

    # Sanitize inputs
    if not name or name == "Unknown":
        name = "Unknown Candidate"

    complex_name = f"{name} {degree} {creation_date}".strip()

    vcf = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"N:;{complex_name};;;",
        f"FN:{complex_name}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL;TYPE=WORK:{email}",
        f"NOTE:{summary} (Extracted via AI)",
        f"REV:{datetime.now().isoformat()}",
        "END:VCARD"
    ]
    return "\n".join(vcf) + "\n"


def process_to_vcf():
    output_filename = f"{get_timestamp()}_Bulk_Import.vcf"
    output_path = os.path.join(FOLDER_PATH, output_filename)
    creation_date = get_short_date()

    print(f"--- Smart Resume to VCF Exporter ---")
    print(f"Target Output: {output_filename}")

    count = 0
    with open(output_path, "w", encoding="utf-8") as vcf_file:
        for filename in os.listdir(FOLDER_PATH):
            if not filename.lower().endswith(".pdf"):
                continue

            filepath = os.path.join(FOLDER_PATH, filename)
            print(f"Processing: {filename}...")

            # 1. Get Text
            text = get_smart_pdf_text(filepath)
            if len(text) < 50:
                print("   [SKIP] Text too short/unreadable.")
                continue

            # 2. Extract Data (Passing filename for context)
            time.sleep(0.5)
            data = ask_ollama_extraction(clean_text_for_llm(text), filename)

            if data:
                # 3. Double Check Name (Python Logic Fallback)
                # If AI gave a bad name, or "Unknown", try to grab it from the filename manually
                ai_name = data.get("name", "")
                if not ai_name or "unknown" in ai_name.lower() or any(char.isdigit() for char in ai_name):
                    fallback_name = parse_name_from_filename(filename)
                    if fallback_name:
                        print(f"    [Correction] Replaced '{ai_name}' with filename name: '{fallback_name}'")
                        data['name'] = fallback_name

                # 4. Create VCard Block
                vcard_block = create_vcard_string(data, creation_date)
                vcf_file.write(vcard_block)
                print(f"   -> Added: {data.get('name')} ({data.get('degree')})")
                count += 1
            else:
                print("   -> Failed to extract data.")

    print(f"\nDone! Created {output_filename} with {count} contacts.")


if __name__ == "__main__":
    process_to_vcf()
</syntaxhighlight>

Latest revision as of 09:14, 4 February 2026

1. The Problem

Students and applicants rarely follow file naming conventions. You likely have a folder that looks like this:

Resume.pdf

CV_Final_v2.docx

MyResume(1).pdf

john_doe.pdf

This makes sorting by date or qualification impossible without opening every single file.

The Goal: Automatically rename these files based on their content to a standard format:

YYMMDD Name Degree/Background.pdf
Example: 250101 Juan Dela Cruz BS Information Technology.pdf

2. Requirements Checklist

Please ensure you have the following ready before starting.

[ ] Ubuntu 24.04 System.

[ ] Python 3.12+ (Pre-installed on Ubuntu 24.04).

[ ] Ollama installed locally (The AI engine).

[ ] A Small Language Model pulled (e.g., granite3.3:2b or llama3.2).

  • Note: Small models are fast but can make mistakes. The script has logic to catch these, but a human review is always recommended.

[ ] Python Libraries: pdfplumber (for PDFs), python-docx (for Word), requests (to talk to Ollama).

[ ] No Images: The files must have embedded text. This script excludes OCR (Optical Character Recognition) to keep it fast and lightweight. Pure image scans will be skipped.

3. How the Script Works (The Logic)

This script acts as a "Project Manager" that hires two distinct specialists to process each file. It does not blindly ask the AI for everything, as small AIs make mistakes with math and dates.

File Discovery:

    • The script looks for .pdf and .docx files in the folder where the script is located.

Text Extraction:

    • It pulls raw text. If the text is less than 50 characters (likely an image scan), it skips the file.

The Date Specialist (Python Regex):

    • Logic: It scans the text for explicit years (e.g., "2023", "2024").
    • Rule: It ignores the word "Present". Why? If a resume from 2022 says "2022 - Present", treating "Present" as "Today" (2026) would incorrectly date the old resume. We stick to the highest printed number.
    • Output: Sets the date to Jan 1st of the highest year found (e.g., 240101).

The Content Specialist (Ollama AI):

    • Logic: It sends the text to the local AI with strict instructions.
    • Rule 1 (Priority): It looks for a Degree (e.g., "BS IT") first. It is forbidden from using "Intern" or "Student" if a degree is found.
    • Rule 2 (Fallback): If the AI fails to find a name, the script grabs the first line of the document as a fallback.

Sanitization & Renaming:

    • It fixes "Spaced Names" (e.g., J O H N -> John).
    • It ensures the filename isn't too long.
    • It renames the file only if the name doesn't already exist.

4. Installation Guide (Ubuntu 24.04)

Open your terminal (Ctrl+Alt+T) and follow these steps exactly.

Step A: System Update

Ensure your system tools are fresh to avoid installation conflicts.

sudo apt update && sudo apt upgrade -y

Step B: Install Ollama & The Model

Install the Ollama Engine:

  1. curl -fsSL https://ollama.com/install.sh | sh
    

Download the Brain (The Model):

  1. We use granite3.3:2b because it is very fast.
    ollama pull granite3.3:2b
    

Step C: Setup Python Environment

Ubuntu 24.04 requires Virtual Environments (venv) for Python scripts.

Create a Project Folder:

mkdir ~/resume_renamer
cd ~/resume_renamer

Create the Virtual Environment:

python3 -m venv venv

Activate the Environment:

source venv/bin/activate

(You should see (venv) at the start of your command line now.)

Install Required Libraries:

pip install requests pdfplumber python-docx

Step D: Create the Script

Create the python file:

nano rename_resumes.py

Paste the Python code provided in the appendix below.

Save and exit: Press Ctrl+O, Enter, then Ctrl+X.

5. Running the Renamer

This script is portable. It works on the files sitting next to it.

Copy the Script: Move the rename_resumes.py file into your folder full of PDFs (e.g., ~/Documents/Student_CVs).

Open Terminal in that folder:

cd ~/Documents/Student_CVs

Activate your Python environment (point to the folder where you created it in Step C):

source ~/resume_renamer/venv/bin/activate

Run the script:

python3 rename_resumes.py

6. Common Errors & Troubleshooting

Error / Behavior: "Intern" instead of "Degree"
    Why it happens: The resume had "INTERN" in big bold letters.
    The Fix (included in script): The prompt explicitly forbids "Intern" if a degree is found.

Error / Behavior: Wrong date (e.g., 260101)
    Why it happens: The resume said "2021 - Present" and the script assumed "Present" = 2026.
    The Fix (included in script): "Present" logic is disabled; only explicit numbers (e.g., 2021) are trusted.

Error / Behavior: Spaced names (J O H N)
    Why it happens: PDF formatting added spaces between letters.
    The Fix (included in script): A regex function detects single letters separated by spaces and collapses them.

Error / Behavior: Script freezes
    Why it happens: Ollama is overwhelmed.
    The Fix (included in script): A 60-second timeout and a 0.5s pause between files.

Error / Behavior: Skipped files
    Why it happens: The PDF is a scanned image (no text).
    The Fix (included in script): This is intended; use an OCR tool first (see the OCR Converter Script in the appendix).

Appendix: The Python Script

Rename Resumes Script

Copy the code below into rename_resumes.py. (Only the improved "Smart PDF Reader" function is shown here; it replaces the plain text-extraction step so that attached application-form pages are never fed to the AI.)

# --- IMPROVED FUNCTION: SMART PDF READER (Skips Forms & Signature Pages) ---
import pdfplumber  # required at the top of rename_resumes.py

def get_smart_pdf_text(filepath):
    """
    Reads PDF pages but SKIPS pages that look like 'Application Forms'.
    Returns the text of the first 2 'valid' resume pages found.
    """
    valid_text = ""
    pages_read = 0
    
    # Phrases that indicate a page is a FORM, not a Resume
    skip_phrases = [
        "APPLICATION FOR EMPLOYMENT", 
        "OFFICIAL USE ONLY", 
        "DO NOT WRITE BELOW THIS LINE",
        "PERSONAL DATA SHEET",
        "APPLICANT'S SIGNATURE",   # Common on attached application-form pages
        "FAMILY BACKGROUND"        # Common on attached application-form pages
    ]

    try:
        with pdfplumber.open(filepath) as pdf:
            for page in pdf.pages:
                text = page.extract_text() or ""
                
                # CHECK: Is this page just a form?
                # We check if ANY of the skip phrases appear in the text
                is_form = any(phrase in text.upper() for phrase in skip_phrases)
                
                if is_form:
                    print(f"    [INFO] Skipped a 'Form' page (found key phrase)...")
                    continue  # Skip this page, check the next one
                
                # If not a form, it's likely the resume. Keep it.
                valid_text += text + "\n"
                pages_read += 1
                
                # Stop after finding 2 valid pages of resume content
                if pages_read >= 2:
                    break
                    
    except Exception as e:
        print(f"    [ERROR] PDF Read Error: {e}")
        return ""
        
    return valid_text
# --------------------------------------

OCR Converter Script

Copy the code below into ocr_converter.py. The Renamer does not work with image-only PDFs, so convert those first. The result is only as good as the OCR engine used (Tesseract, via ocrmypdf). Run it with:

python3 ocr_converter.py

import os
import subprocess
import pdfplumber

# Configuration
FOLDER_PATH = "."  # Current folder
MIN_TEXT_LENGTH = 50  # If text is less than this, we assume it's an image

def has_embedded_text(file_path):
    """Checks if a PDF already has text."""
    try:
        with pdfplumber.open(file_path) as pdf:
            full_text = ""
            for page in pdf.pages:
                text = page.extract_text()
                if text:
                    full_text += text
            
            # If we found enough text, return True
            if len(full_text.strip()) > MIN_TEXT_LENGTH:
                return True
    except Exception as e:
        print(f"Error reading {file_path}: {e}")
        return False
    return False

def ocr_file(file_path):
    """Runs OCRmyPDF on the file."""
    output_path = os.path.splitext(file_path)[0] + "_OCR.pdf"  # handles .PDF as well as .pdf
    
    # Don't re-OCR if the output already exists
    if os.path.exists(output_path):
        print(f"Skipping {file_path} (OCR version already exists)")
        return

    print(f"🖼️  Image Detected: Converting {file_path}...")
    
    try:
        # Run the OCR command
        # --force-ocr: Process even if it thinks there is some text (often garbage in scans)
        # --deskew: Straighten crooked scans
        command = [
            "ocrmypdf", 
            "--force-ocr", 
            "--deskew", 
            file_path, 
            output_path
        ]
        
        result = subprocess.run(command, capture_output=True, text=True)
        
        if result.returncode == 0:
            print(f"✅ Success: Created {output_path}")
        else:
            print(f"❌ Failed to OCR {file_path}")
            print(result.stderr)
            
    except FileNotFoundError:
        print("❌ Error: 'ocrmypdf' is not installed. Run 'sudo apt install ocrmypdf' first.")

def main():
    print("🔍 Scanning for image-based PDFs...")
    files = [f for f in os.listdir(FOLDER_PATH) if f.lower().endswith(".pdf") and "_OCR" not in f]
    
    count = 0
    for filename in files:
        file_path = os.path.join(FOLDER_PATH, filename)
        
        if not has_embedded_text(file_path):
            ocr_file(file_path)
            count += 1
            
    if count == 0:
        print("🎉 No image-only PDFs found. All files already have text!")
    else:
        print(f"\n✨ Processed {count} files.")

if __name__ == "__main__":
    main()

PDF to VCF Script

Copy the code below into pdf2vcf.py. It creates a bulk VCF file that you can import into your contacts app. Run it with:

python3 pdf2vcf.py

import os
import requests
import json
import pdfplumber
import re
from datetime import datetime
import time

# --- CONFIGURATION ---
FOLDER_PATH = os.path.dirname(os.path.abspath(__file__))
OLLAMA_MODEL = "granite3.3:2b" 
# ---------------------

def get_timestamp():
    """Returns current YYMMDD-HHMMSS"""
    return datetime.now().strftime('%y%m%d-%H%M%S')

def get_short_date():
    """Returns current YYMMDD"""
    return datetime.now().strftime('%y%m%d')

# --- SMART PDF READER ---
def get_smart_pdf_text(filepath):
    """
    Reads PDF pages but SKIPS pages that look like 'Application Forms'.
    Returns the text of the first 2 'valid' resume pages found.
    """
    valid_text = ""
    pages_read = 0
    skip_phrases = [
        "APPLICATION FOR EMPLOYMENT", "OFFICIAL USE ONLY", 
        "DO NOT WRITE BELOW THIS LINE", "PERSONAL DATA SHEET",
        "APPLICANT'S SIGNATURE", "FAMILY BACKGROUND"
    ]

    try:
        with pdfplumber.open(filepath) as pdf:
            for page in pdf.pages:
                text = page.extract_text() or ""
                # CHECK: Is this page just a form?
                if any(phrase in text.upper() for phrase in skip_phrases):
                    continue 
                
                valid_text += text + "\n"
                pages_read += 1
                if pages_read >= 2: break     
    except Exception as e:
        print(f"    [ERROR] PDF Read Error: {e}")
        return ""
    return valid_text

def clean_text_for_llm(text):
    clean = " ".join(text.split())
    return clean[:6000]

def parse_name_from_filename(filename):
    """
    Fallback: Tries to guess the name from a filename like '260101 Kim Ong Diploma.pdf'
    """
    # Remove extension
    base = os.path.splitext(filename)[0]
    
    # Regex: Look for 6 digits at start, then text
    match = re.search(r'^\d{6}\s+(.*?)\s+(?:Bachelor|Diploma|Certificate|General|Master|PhD|Associate|Engineer|Architect)', base, re.IGNORECASE)
    if match:
        return match.group(1).strip()
    
    # Weaker Regex: Just take the first 3 words after the date
    match_weak = re.search(r'^\d{6}\s+([A-Za-z-]+\s+[A-Za-z-]+\s?[A-Za-z-]*)', base)
    if match_weak:
        return match_weak.group(1).strip()

    return None

def ask_ollama_extraction(text, filename):
    """
    Asks LLM to extract specific fields, using the FILENAME as a hint.
    """
    system_instruction = (
        "You are a Data Extraction Expert. Extract details from the resume.\n"
        f"CONTEXT: The file is named '{filename}'. This filename likely contains the correct spelling of the Name and Degree.\n"
        "\nRULES:\n"
        "1. **Double Check the Name:** If the resume text has OCR errors (e.g., 'K1m 0ng'), use the spelling from the Filename ('Kim Ong').\n"
        "2. **Extract:** Full Name, Educational Degree (Short), Email, Phone, and Summary.\n"
        "3. **Summary:** Write a concise 3-sentence summary of their key skills.\n"
        "\nRETURN JSON ONLY:\n"
        "{\n"
        '  "name": "John Doe",\n'
        '  "degree": "BS IT",\n'
        '  "email": "john@email.com",\n'
        '  "phone": "09123456789",\n'
        '  "summary": "Experienced in..."\n'
        "}"
    )

    prompt = f"Resume Text:\n{text}\n\n{system_instruction}"

    url = "http://localhost:11434/api/generate"
    data = {
        "model": OLLAMA_MODEL,
        "prompt": prompt,
        "stream": False,
        "format": "json", 
        "options": {"temperature": 0.1, "num_ctx": 4096}
    }

    try:
        response = requests.post(url, json=data, timeout=60)
        response.raise_for_status()
        result_json = response.json()['response']
        return json.loads(result_json)
    except Exception as e:
        print(f"    [Warning] AI Extraction failed: {e}")
        return None

def create_vcard_string(data, creation_date):
    """
    Formats the data into VCF 3.0 format.
    Format: Name Degree YYMMDD (All in First Name field for easy searching)
    """
    name = data.get("name", "Unknown")
    degree = data.get("degree", "")
    email = data.get("email", "")
    phone = data.get("phone", "")
    summary = data.get("summary", "")

    # Sanitize inputs
    if not name or name == "Unknown":
        name = "Unknown Candidate"
    
    complex_name = f"{name} {degree} {creation_date}".strip()
    
    vcf = [
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"N:;{complex_name};;;", 
        f"FN:{complex_name}",
        f"TEL;TYPE=CELL:{phone}",
        f"EMAIL;TYPE=WORK:{email}",
        f"NOTE:{summary} (Extracted via AI)",
        f"REV:{datetime.now().isoformat()}",
        "END:VCARD"
    ]
    return "\n".join(vcf) + "\n"

def process_to_vcf():
    output_filename = f"{get_timestamp()}_Bulk_Import.vcf"
    output_path = os.path.join(FOLDER_PATH, output_filename)
    creation_date = get_short_date() 

    print(f"--- Smart Resume to VCF Exporter ---")
    print(f"Target Output: {output_filename}")
    
    count = 0
    
    with open(output_path, "w", encoding="utf-8") as vcf_file:
        
        for filename in os.listdir(FOLDER_PATH):
            if not filename.lower().endswith(".pdf"):
                continue

            filepath = os.path.join(FOLDER_PATH, filename)
            print(f"Processing: {filename}...")

            # 1. Get Text
            text = get_smart_pdf_text(filepath)
            if len(text) < 50:
                print("    [SKIP] Text too short/unreadable.")
                continue

            # 2. Extract Data (Passing filename for context)
            time.sleep(0.5) 
            data = ask_ollama_extraction(clean_text_for_llm(text), filename)

            if data:
                # 3. Double Check Name (Python Logic Fallback)
                # If AI gave a bad name, or "Unknown", try to grab it from the filename manually
                ai_name = data.get("name", "")
                if not ai_name or "unknown" in ai_name.lower() or any(char.isdigit() for char in ai_name):
                    fallback_name = parse_name_from_filename(filename)
                    if fallback_name:
                        print(f"    [Correction] Replaced '{ai_name}' with filename name: '{fallback_name}'")
                        data['name'] = fallback_name

                # 4. Create VCard Block
                vcard_block = create_vcard_string(data, creation_date)
                vcf_file.write(vcard_block)
                print(f"    -> Added: {data.get('name')} ({data.get('degree')})")
                count += 1
            else:
                print("    -> Failed to extract data.")

    print(f"\nDone! Created {output_filename} with {count} contacts.")

if __name__ == "__main__":
    process_to_vcf()