
☕ CoffeeChatAI

CoffeeChatAI is a lightweight English language model.
It was developed and customized by Adrian Charles and his team, Bluckhut, as a side project, with the goal of making an accessible, branded, chatbot-style AI for text generation.

CoffeeChatAI can be used to generate text for creative, academic, or entertainment purposes.


Model Details

  • Developed by: Adrian Charles & Team Bluckhut
  • Base model: https://huggingface.co/topboykrepta/coffechatai
  • Model type: Transformer-based causal language model
  • Language: English
  • Parameters: ~1.6M
  • License: Apache 2.0
  • Description:
    CoffeeChatAI is a branded and documented model, designed to serve as the backbone for the CoffeeChat project.
    It is compact, fast, and intended for experimentation and educational side projects.

Intended Uses

✅ Possible Applications

  • Writing assistance (autocompletion, idea generation, grammar help)
  • Creative text generation (stories, poetry, dialogue)
  • Entertainment (chatbots, games, roleplay scenarios)
  • Educational demos (exploring transformers, model compression, and fine-tuning)

⚠️ Limitations & Risks

  • May produce biased, offensive, or inaccurate content
  • Not suitable for tasks requiring factual correctness (e.g., news, medical, legal advice)
  • Small size means weaker performance compared to larger GPT-2/GPT-3 models

How to Use

You can load and use CoffeeChatAI directly with Hugging Face transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("topboykrepta/CoffeeChatAI")
model = AutoModelForCausalLM.from_pretrained("topboykrepta/CoffeeChatAI")

inputs = tokenizer("Hello, I am CoffeeChat AI,", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, num_return_sequences=2, do_sample=True)

for i, output in enumerate(outputs):
    print(f"Generated {i+1}: {tokenizer.decode(output, skip_special_tokens=True)}")
```
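The `do_sample=True` flag above draws each next token from the model's probability distribution instead of always taking the most likely one; `generate` also accepts a `temperature` parameter that flattens or sharpens that distribution before sampling. As a minimal standard-library sketch of what temperature does (the logits below are hypothetical, not produced by CoffeeChatAI):

```python
import math
import random

def temperature_softmax(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more diverse samples).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens
logits = [2.0, 1.0, 0.5, 0.1]

cold = temperature_softmax(logits, temperature=0.5)
hot = temperature_softmax(logits, temperature=2.0)

# At low temperature, probability mass concentrates on the top token
print(f"T=0.5: {[round(p, 3) for p in cold]}")
print(f"T=2.0: {[round(p, 3) for p in hot]}")

# Sample a token index from the tempered distribution
token = random.choices(range(len(logits)), weights=cold, k=1)[0]
print(f"sampled token index: {token}")
```

With a small model like this one, moderate temperatures (roughly 0.7 to 1.0) tend to balance coherence against repetition.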

Or with the Hugging Face pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="topboykrepta/CoffeeChatAI")
print(generator("Hello, I am CoffeeChat AI,", max_length=30, num_return_sequences=2))
```

If you use this model, please cite:

```bibtex
@misc{CoffeeChatAI2025,
  author = {Adrian Charles and Team Bluckhut},
  title = {CoffeeChatAI: A Tiny Model for Chat Applications},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/topboykrepta/CoffeeChatAI}},
}
```

