# Migration Guide

## Migrating to 0.2
`openai` v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of calling a global class method. Therefore, some changes are required for users of `pyautogen<0.2`:
- `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated. `max_retries` can be set for each client.
- MathChat is unsupported until it is tested in a future release.
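Under these renames, an updated configuration might look like the following sketch (the model name, URL, and numeric values are illustrative choices, not library defaults):

```python
# Illustrative config for pyautogen>=0.2; values are examples, not defaults.
config_list = [
    {
        "model": "gpt-4",                         # example model name
        "base_url": "https://api.openai.com/v1",  # formerly api_base
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 60,  # formerly request_timeout (seconds)
}
# Note: retries are no longer controlled by max_retry_period / retry_wait_time;
# max_retries is configured per client in openai v1.
print(llm_config["timeout"])
```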
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`:
```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```
- Inference parameter tuning and inference logging features are updated:
```python
import autogen.runtime_logging

# Start logging
autogen.runtime_logging.start()

# Stop logging
autogen.runtime_logging.stop()
```
Check out the Logging documentation and the Logging example notebook to learn more.
Inference parameter tuning can be done via `flaml.tune`.
- `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion API. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()`; it is now decided automatically by `cache_seed: int | None`. The difference between autogen's `cache_seed` and openai's `seed` is that:
  - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai API call is made.
  - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to `None`, an openai API call is made even for the same input, and there is no guarantee of getting exactly the same output.
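To make the distinction concrete, the disk-cache behavior can be sketched in plain Python. This is only an illustration of the caching idea, not autogen's actual implementation; `fake_completion` is a hypothetical stand-in for the openai API call:

```python
import hashlib
import json
import shutil
import tempfile
from pathlib import Path

calls = 0

def fake_completion(messages):
    """Hypothetical stand-in for an openai API call; counts invocations."""
    global calls
    calls += 1
    return {"content": "4"}

def cached_create(cache_dir, cache_seed, messages):
    """Return a cached response when (cache_seed, messages) was seen before."""
    if cache_seed is None:  # caching disabled: always call the API
        return fake_completion(messages)
    key = hashlib.sha256(
        json.dumps([cache_seed, messages], sort_keys=True).encode()
    ).hexdigest()
    path = Path(cache_dir) / f"{key}.json"
    if path.exists():  # cache hit: no API call is made
        return json.loads(path.read_text())
    response = fake_completion(messages)
    path.write_text(json.dumps(response))
    return response

cache_dir = tempfile.mkdtemp()
msgs = [{"role": "user", "content": "2+2="}]
r1 = cached_create(cache_dir, 41, msgs)
r2 = cached_create(cache_dir, 41, msgs)  # identical input: served from disk
print(calls)  # the API stand-in ran only once
shutil.rmtree(cache_dir)
```

Because the cache key is derived from the seed and the exact input, a hit bypasses the API entirely and returns byte-identical output, which openai's server-side `seed` cannot guarantee.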