OpenAI gpt-oss 120B is a 120-billion-parameter open-weight language model released under the Apache 2.0 license. It is well suited for reasoning and function-calling use cases and is optimized for efficient deployment.
The 120B model achieves near-parity with OpenAI o4-mini on core reasoning benchmarks while running on a single 80 GB GPU.
Managed API (MaaS) specifications
| | |
|---|---|
| Model ID | `gpt-oss-120b-maas` |
| Launch stage | GA |
| Supported inputs & outputs | Text (input), text (output) |
| Capabilities | Reasoning, function calling |
| Usage types | |
| Versions | |
| Supported regions | Model availability: global endpoint, us-central1<br>ML processing: |
| Limits | global endpoint:<br>us-central1: |
| Pricing | See Pricing. |
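
The managed API can be called like any other MaaS partner model on Vertex AI. The snippet below is a minimal sketch, assuming the OpenAI-compatible Chat Completions endpoint that Vertex AI exposes for MaaS models; `PROJECT_ID`, `REGION`, the base URL path, and the `openai/gpt-oss-120b-maas` model string are assumptions to confirm against the current documentation.

```python
import google.auth
import google.auth.transport.requests
from openai import OpenAI

PROJECT_ID = "your-project-id"  # placeholder: replace with your project
REGION = "us-central1"          # or the global endpoint listed in the table above

# Fetch a short-lived access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

# Assumption: Vertex AI exposes an OpenAI-compatible Chat Completions
# endpoint for MaaS models at this path; confirm the exact URL in the docs.
client = OpenAI(
    base_url=(
        f"https://{REGION}-aiplatform.googleapis.com/v1/projects/"
        f"{PROJECT_ID}/locations/{REGION}/endpoints/openapi"
    ),
    api_key=credentials.token,
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b-maas",  # assumption: publisher-prefixed model ID
    messages=[
        {"role": "user", "content": "List three use cases for function calling."}
    ],
)
print(response.choices[0].message.content)
```

Because the access token is short-lived, long-running services should refresh the credentials before each request rather than reusing a stale token.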
Deploy as a self-deployed model
To self-deploy the model, navigate to the gpt-oss 120B model card in the Model Garden console and click Deploy model. For more information about deploying and using partner models, see Deploy a partner model and make prediction requests.
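
After the deployment finishes, you can send online prediction requests to the dedicated endpoint it creates. The following is a minimal sketch using the Vertex AI SDK; `PROJECT_ID`, `REGION`, `ENDPOINT_ID`, and the vLLM-style request payload are assumptions, so check the model card for the exact request schema your deployment expects.

```python
from google.cloud import aiplatform

PROJECT_ID = "your-project-id"   # placeholder: replace with your project
REGION = "us-central1"           # region where you deployed the model
ENDPOINT_ID = "1234567890"       # placeholder: endpoint ID shown after deployment

aiplatform.init(project=PROJECT_ID, location=REGION)

# Reference the dedicated endpoint created by the Model Garden deployment.
endpoint = aiplatform.Endpoint(
    f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
)

# Assumption: the serving container accepts a vLLM-style prompt payload.
response = endpoint.predict(
    instances=[
        {"prompt": "Explain function calling in one paragraph.", "max_tokens": 256}
    ]
)
print(response.predictions)
```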