How to use OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1")
model = AutoModelForTokenClassification.from_pretrained("OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1")
```

French PII Detection Model | 395M Parameters | Open Source
OpenMed-PII-French-NomicMed-Large-395M-v1 is a transformer-based token classification model fine-tuned for Personally Identifiable Information (PII) detection in French text. This model identifies and classifies 54 types of sensitive information including names, addresses, social security numbers, medical record numbers, and more.
Evaluated on the French subset of the AI4Privacy dataset:
| Metric | Score |
|---|---|
| Micro F1 | 0.9722 |
| Precision | 0.9704 |
| Recall | 0.9740 |
| Macro F1 | 0.9654 |
| Weighted F1 | 0.9724 |
| Accuracy | 0.9962 |
Ranking among OpenMed French PII models on the same benchmark:

| Rank | Model | F1 | Precision | Recall |
|---|---|---|---|---|
| 1 | OpenMed-PII-French-SuperClinical-Large-434M-v1 | 0.9797 | 0.9790 | 0.9804 |
| 2 | OpenMed-PII-French-EuroMed-210M-v1 | 0.9762 | 0.9747 | 0.9777 |
| 3 | OpenMed-PII-French-ClinicalBGE-568M-v1 | 0.9733 | 0.9718 | 0.9748 |
| 4 | OpenMed-PII-French-BigMed-Large-560M-v1 | 0.9733 | 0.9716 | 0.9749 |
| 5 | OpenMed-PII-French-SnowflakeMed-Large-568M-v1 | 0.9728 | 0.9711 | 0.9745 |
| 6 | OpenMed-PII-French-SuperMedical-Large-355M-v1 | 0.9728 | 0.9712 | 0.9744 |
| 7 | OpenMed-PII-French-NomicMed-Large-395M-v1 | 0.9722 | 0.9704 | 0.9740 |
| 8 | OpenMed-PII-French-mClinicalE5-Large-560M-v1 | 0.9713 | 0.9697 | 0.9729 |
| 9 | OpenMed-PII-French-mSuperClinical-Base-279M-v1 | 0.9674 | 0.9662 | 0.9687 |
| 10 | OpenMed-PII-French-ClinicalBGE-Large-335M-v1 | 0.9668 | 0.9644 | 0.9692 |
This model detects 54 PII entity types organized into categories:
Financial & digital identifiers:

| Entity | Description |
|---|---|
| ACCOUNTNAME | Account name |
| BANKACCOUNT | Bank account number |
| BIC | Bank identifier code (BIC) |
| BITCOINADDRESS | Bitcoin address |
| CREDITCARD | Credit card number |
| CREDITCARDISSUER | Credit card issuer |
| CVV | Card verification value (CVV) |
| ETHEREUMADDRESS | Ethereum address |
| IBAN | IBAN |
| IMEI | IMEI number |
| ... | and 12 more |
Personal attributes:

| Entity | Description |
|---|---|
| AGE | Age |
| DATEOFBIRTH | Date of birth |
| EYECOLOR | Eye color |
| FIRSTNAME | First name |
| GENDER | Gender |
| HEIGHT | Height |
| LASTNAME | Last name |
| MIDDLENAME | Middle name |
| OCCUPATION | Occupation |
| PREFIX | Name prefix |
| ... | and 1 more |
Contact information:

| Entity | Description |
|---|---|
| EMAIL | Email address |
| PHONE | Phone number |
Address & location:

| Entity | Description |
|---|---|
| BUILDINGNUMBER | Building number |
| CITY | City |
| COUNTY | County |
| GPSCOORDINATES | GPS coordinates |
| ORDINALDIRECTION | Ordinal direction |
| SECONDARYADDRESS | Secondary address (apartment, suite) |
| STATE | State |
| STREET | Street |
| ZIPCODE | Postal / ZIP code |
Employment & organization:

| Entity | Description |
|---|---|
| JOBDEPARTMENT | Job department |
| JOBTITLE | Job title |
| ORGANIZATION | Organization name |
Currency & amounts:

| Entity | Description |
|---|---|
| AMOUNT | Amount |
| CURRENCY | Currency |
| CURRENCYCODE | Currency code |
| CURRENCYNAME | Currency name |
| CURRENCYSYMBOL | Currency symbol |
Date & time:

| Entity | Description |
|---|---|
| DATE | Date |
| TIME | Time |
```python
from transformers import pipeline

# Load the PII detection pipeline
ner = pipeline(
    "ner",
    model="OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1",
    aggregation_strategy="simple",
)

text = """
Patient Jean Martin (né le 15/03/1985, NSS: 1 85 03 75 108 234 67) a été vu aujourd'hui.
Contact: jean.martin@email.fr, Téléphone: 06 12 34 56 78.
Adresse: 123 Avenue des Champs-Élysées, 75008 Paris.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```
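In practice it helps to drop low-confidence detections before acting on them. A minimal sketch over hand-written entity dicts that mimic the pipeline's aggregated output format (the 0.80 threshold is an illustrative assumption to tune per use case, not a value from the model card):

```python
# Keep only high-confidence detections before redaction.
# These dicts mimic the pipeline's aggregated output format.
sample_entities = [
    {"entity_group": "FIRSTNAME", "word": "Jean", "score": 0.998, "start": 8, "end": 12},
    {"entity_group": "LASTNAME", "word": "Martin", "score": 0.996, "start": 13, "end": 19},
    {"entity_group": "CITY", "word": "Paris", "score": 0.41, "start": 120, "end": 125},
]

SCORE_THRESHOLD = 0.80  # assumed cut-off; tune on your own data

confident = [e for e in sample_entities if e["score"] >= SCORE_THRESHOLD]
for e in confident:
    print(f"{e['entity_group']}: {e['word']}")
```

Raising the threshold trades recall for precision; for de-identification, where missed PII is usually costlier than over-redaction, a lower threshold is often safer.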
```python
def redact_pii(text, entities):
    """Replace detected PII spans with their entity-type labels."""
    # Sort entities by start position (descending) so earlier offsets stay valid
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification
redacted_text = redact_pii(text, entities)
print(redacted_text)
```
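When the de-identified text must stay readable (e.g. for audits or downstream NLP), pseudonymization with consistent numbered placeholders is often preferable to flat redaction. A minimal sketch assuming the pipeline's aggregated output format; the placeholder scheme and the hand-written offsets below are illustrative, not part of the model card:

```python
from collections import defaultdict

def pseudonymize(text, entities):
    """Replace each PII span with a numbered placeholder, reusing the
    same placeholder whenever the same surface form reappears."""
    counters = defaultdict(int)  # per-entity-type counter
    seen = {}                    # (entity type, surface form) -> placeholder
    out = text
    # Work right-to-left so earlier character offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        key = (ent["entity_group"], ent["word"])
        if key not in seen:
            counters[ent["entity_group"]] += 1
            seen[key] = f"[{ent['entity_group']}_{counters[ent['entity_group']]}]"
        out = out[:ent["start"]] + seen[key] + out[ent["end"]:]
    return out

# Toy example with hand-written offsets mimicking pipeline output.
text = "Jean Martin habite Paris. Jean aime Paris."
entities = [
    {"entity_group": "FIRSTNAME", "word": "Jean", "start": 0, "end": 4},
    {"entity_group": "LASTNAME", "word": "Martin", "start": 5, "end": 11},
    {"entity_group": "CITY", "word": "Paris", "start": 19, "end": 24},
    {"entity_group": "FIRSTNAME", "word": "Jean", "start": 26, "end": 30},
    {"entity_group": "CITY", "word": "Paris", "start": 36, "end": 41},
]
result = pseudonymize(text, entities)
print(result)
# → [FIRSTNAME_1] [LASTNAME_1] habite [CITY_1]. [FIRSTNAME_1] aime [CITY_1].
```

Keeping the `seen` mapping around also enables re-identification later, which may or may not be desirable depending on your compliance requirements.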
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Patient Jean Martin (né le 15/03/1985, NSS: 1 85 03 75 108 234 67) a été vu aujourd'hui.",
    "Contact: jean.martin@email.fr, Téléphone: 06 12 34 56 78.",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
```
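The raw argmax ids from the snippet above still need to be mapped to label strings (via `model.config.id2label`) and grouped into entity spans. A minimal sketch with a hypothetical five-label BIO map standing in for the model's real label inventory:

```python
# Hypothetical subset of a BIO label map; in practice read model.config.id2label.
id2label = {0: "O", 1: "B-FIRSTNAME", 2: "I-FIRSTNAME", 3: "B-LASTNAME", 4: "I-LASTNAME"}

def bio_to_spans(tokens, label_ids):
    """Group BIO-tagged tokens into (entity_type, tokens) spans."""
    spans, current = [], None
    for tok, lab_id in zip(tokens, label_ids):
        label = id2label[lab_id]
        if label.startswith("B-"):
            current = [label[2:], [tok]]  # start a new entity span
            spans.append(current)
        elif label.startswith("I-") and current is not None and current[0] == label[2:]:
            current[1].append(tok)        # continue the current span
        else:
            current = None                # "O" or an orphan I- tag ends the span
    return [(etype, toks) for etype, toks in spans]

# Fake per-token argmax ids standing in for predictions[0].tolist().
tokens = ["Patient", "Jean", "Mar", "##tin", "vu"]
spans = bio_to_spans(tokens, [0, 1, 3, 4, 0])
print(spans)  # → [('FIRSTNAME', ['Jean']), ('LASTNAME', ['Mar', '##tin'])]
```

This is essentially what `aggregation_strategy="simple"` does for you in the pipeline example, which is the easier path unless you need custom batching.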
Important: This model is intended as an assistive tool, not a replacement for human review.
```bibtex
@misc{openmed-pii-2026,
  title     = {OpenMed-PII-French-NomicMed-Large-395M-v1: French PII Detection Model},
  author    = {OpenMed Science},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/OpenMed/OpenMed-PII-French-NomicMed-Large-395M-v1}
}
```
Base model: answerdotai/ModernBERT-large