GLM-4-32B-0414

Model Description

GLM-4-32B-0414 is a new-generation open-source model in the GLM series with 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local deployment.

GLM-4-32B-Base-0414 was pre-trained on 15T tokens of high-quality data, including a large amount of synthetic data covering various reasoning types, laying the foundation for subsequent reinforcement learning extensions. In the post-training phase, in addition to aligning the model with human preferences in dialogue scenarios, the research team used techniques such as rejection sampling and reinforcement learning to improve instruction following, engineering code, and function calling, strengthening the atomic capabilities required for agent tasks.

GLM-4-32B-0414 achieves strong results in engineering code, artifact generation, function calling, search-based question answering, and report generation, with some benchmark metrics approaching or even surpassing those of larger models such as GPT-4o and DeepSeek-V3-0324 (671B).
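The function-calling capability mentioned above is commonly exercised through an OpenAI-compatible tools schema when the model is served locally (e.g. behind an OpenAI-compatible inference server). A minimal sketch of building such a request payload; the endpoint behavior, tool name, and parameters here are illustrative assumptions, not part of GLM's documented API:

```python
import json

# Illustrative tool definition in the OpenAI-compatible "tools" format.
# The tool name and its parameters are hypothetical examples.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Request body an OpenAI-compatible local server would typically accept.
payload = {
    "model": "GLM-4-32B-0414",
    "messages": [
        {"role": "user", "content": "What's the weather in Beijing?"},
    ],
    "tools": [weather_tool],
}

# Serialize exactly as it would be sent over HTTP.
body = json.dumps(payload)
print(json.loads(body)["tools"][0]["function"]["name"])
```

If the model decides to call the tool, the response would carry a `tool_calls` entry whose arguments are a JSON string matching the declared parameter schema; the caller executes the function and returns the result in a follow-up `tool` message.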


Recommended Models

DeepSeek-R1

Performance on par with OpenAI-o1. Fully open-source model and technical report; code and models are released under the MIT License, permitting free distillation and commercialization.

o4-mini-2025-04-16

Our faster, cost-efficient reasoning model, delivering strong performance on math, coding, and vision.

o3-2025-04-16

Our most powerful reasoning model, with leading performance on coding, math, science, and vision.