Google has launched VaultGemma, a privacy-focused language model with one billion parameters, trained with differential privacy (DP) to keep individual training examples confidential. Developed in collaboration with DeepMind, which contributed new scaling laws describing how DP training behaves as model size, compute, and data grow, VaultGemma applies DP at the pre-training stage by adding calibrated noise to gradient updates, which limits how much the model can memorize any single training sequence. The privacy guarantee comes at a cost: DP training is less stable, requires much larger batch sizes, and incurs higher computation costs. On benchmarks such as HellaSwag and TriviaQA, VaultGemma achieves performance comparable to GPT-2. Google emphasized that a partial prompt cannot be used to recover the remainder of a training example, although facts that appear across many training documents can still surface in responses, and further research is needed to close the utility gap with non-private models.
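
The noise addition Google describes follows the general DP-SGD recipe: clip each example's gradient to a fixed norm, sum the clipped gradients, and add Gaussian noise calibrated to that clipping bound before the optimizer step. The sketch below illustrates the mechanism in miniature with PyTorch; the tiny linear model, toy batch, and the `clip_norm` and `noise_multiplier` values are illustrative assumptions, not details of VaultGemma's actual training setup.

```python
# Minimal DP-SGD-style step (illustrative sketch, not Google's implementation):
# clip per-example gradients, add calibrated Gaussian noise, then step.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 2)                    # stand-in for a transformer
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0         # per-example gradient clipping bound C (assumed value)
noise_multiplier = 1.1  # sigma, chosen from the target (epsilon, delta) budget

x = torch.randn(32, 16)                     # toy batch
y = torch.randint(0, 2, (32,))

# Accumulate clipped per-example gradients.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip_norm / (total_norm.item() + 1e-6))
    for s, g in zip(summed, grads):
        s += g * scale

# Add Gaussian noise scaled to the clipping bound, average, and update.
model.zero_grad()
for p, s in zip(model.parameters(), summed):
    noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
    p.grad = (s + noise) / len(x)
opt.step()
```

Because each example's contribution is bounded by the clipping norm and masked by noise of comparable magnitude, no single training sequence can dominate an update, which is why larger batch sizes are needed to keep the signal-to-noise ratio workable.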