GLM-Z1-32B-0414 builds on the foundation of GLM-4-32B-0414, adding specialized training in mathematics, programming, and logical reasoning to strengthen its analytical abilities. A key element of its development is reinforcement learning (RL) based on pairwise ranking feedback, which refines the model’s general reasoning skills beyond what standard fine-tuning achieves. Despite its relatively compact size of 32 billion parameters, GLM-Z1-32B-0414 is competitive with much larger models like the 671B-parameter DeepSeek-R1 on certain tasks. Evaluations on benchmarks such as AIME 24/25, LiveCodeBench, and GPQA confirm its strong mathematical and logical reasoning capabilities, making it well suited to a wide range of complex real-world problems.
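The exact pairwise ranking objective used for GLM-Z1-32B-0414 is not spelled out here, but a common way to turn "response A beats response B" preference feedback into a training signal is a Bradley-Terry-style pairwise ranking loss. The sketch below illustrates that generic idea; the function name and toy scores are illustrative assumptions, not the model's actual training code.

```python
# Illustrative sketch only: a generic Bradley-Terry-style pairwise ranking
# loss, one common way to implement ranking-feedback training. It is NOT
# GLM-Z1's published recipe.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(score_preferred: torch.Tensor,
                          score_rejected: torch.Tensor) -> torch.Tensor:
    """Loss is low when the preferred response is scored above the rejected one.

    Both tensors hold scalar scores (e.g. reward-model outputs) for a batch
    of response pairs drawn from the same prompt.
    """
    # -log sigmoid(s_pref - s_rej): drives the preferred score above the rejected score.
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Toy usage: scores for three preference pairs (hypothetical values).
preferred = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.4, 0.9, 1.5])
print(pairwise_ranking_loss(preferred, rejected))  # scalar loss, ~0.63 here
```

Gradients from this loss can then be used either to train a reward model that scores rollouts during RL, or directly as a preference-optimization signal; both are standard patterns for ranking-based post-training.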