
Releases: bespokelabsai/curator

v0.1.17.post1

30 Jan 17:02

What's Changed

Full Changelog: v0.1.17...v0.1.17.post1

v0.1.17

28 Jan 22:54

What's Changed

Full Changelog: v0.1.16...v0.1.17

v0.1.16

21 Jan 19:33

What's Changed

Full Changelog: v0.1.15...v0.1.16

v0.1.15.post1

15 Jan 16:06

What's Changed

Full Changelog: v0.1.15...v0.1.15.post1

v0.1.15

14 Jan 22:59

What's Changed

New Contributors

Full Changelog: v0.1.14...v0.1.15

v0.1.14

07 Jan 02:32

What's Changed

  • Fix bug in batch mapping and get right order for outputs. by @madiator in #289
  • refactor: use os.path.join consistently for path handling by @devin-ai-integration in #291
  • Add Anthropic batch and general refactor by @RyanMarten in #243
  • Remove duplicate resource limit by @RyanMarten in #299
  • Merge dev into main by @vutrung96 in #306
  • Re-do docstrings for batch request processors by @devin-ai-integration in #308
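
The path-handling refactor in #291 standardizes on os.path.join. A minimal, generic illustration of why that pattern is preferable to string concatenation (the file names here are made up for the example, not curator's actual paths):

```python
import os

# os.path.join handles separators and platform differences automatically,
# so no component needs trailing-slash bookkeeping.
cache_dir = os.path.join("cache", "curator")
request_file = os.path.join(cache_dir, "requests_0.jsonl")

# Manual concatenation breaks as soon as a component already ends
# (or fails to end) with a separator:
manual = "cache/" + "/curator" + "requests_0.jsonl"  # "cache//curatorrequests_0.jsonl"
```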

Full Changelog: 0.1.13...v0.1.14

0.1.13

23 Dec 07:42
93912ca

What's Changed

  1. Fix issues around litellm to support the Gemini Flash Thinking model.
  2. Add support for o1.

Details

  • Ryan marten patch 1 by @RyanMarten in #273
  • Clean ups in llm.py by @madiator in #274
  • Put the examples in respective folders and add requirements.txt everywhere by @madiator in #275
  • Catch catch-all Exception since litellm doesn't throw specific error. by @madiator in #281
  • feat: add o1 model structured output support by @devin-ai-integration in #284
  • Bump to 0.1.13 by @madiator in #285
  • Merge dev into main for 0.1.13 release. by @madiator in #286
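
PR #281 above works around litellm raising only generic exceptions. A hedged sketch of the pattern it describes; the function and variable names here are illustrative, not curator's internals:

```python
import logging

logger = logging.getLogger("example")

def call_with_fallback(request_fn, *args, **kwargs):
    """Call a provider function, treating any Exception as a soft failure.

    When an upstream library does not raise specific error types,
    catching a narrow exception class would let failures escape,
    so the broad `except Exception` is deliberate here.
    """
    try:
        return request_fn(*args, **kwargs)
    except Exception as exc:  # litellm may not throw a specific error
        logger.warning("request failed: %s", exc)
        return None

# Hypothetical failing provider call, for demonstration only.
def flaky_request(prompt):
    raise RuntimeError("upstream 500")

result = call_with_fallback(flaky_request, "hello")
# result is None; the error was logged instead of propagating
```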

Full Changelog: v0.1.12...0.1.13

v0.1.12

17 Dec 06:51
5dbb913

What's Changed

  • [curator-viewer] enabled toast instead of alert for copy paste, and fixed streaming toast by @CharlieJCJ in #165
  • Use huggingface modified pickler to fix path-dependent caching by @vutrung96 in #230
  • Change rpm and tpm to have lower default and allow for manual setting by @RyanMarten in #234
  • Various fixes to increase the reliability of batch processing by @vutrung96 in #231
  • Graceful error handling for missing requests by @vutrung96 in #244
  • OpenAIOnline - if api_key missing, directly error out by @CharlieJCJ in #237
  • Increase default values for tpm/rpm, otherwise there is no progress. by @madiator in #245
  • refactor: rename Prompter class to LLM by @devin-ai-integration in #242
  • Rename prompter. Simplify prompt_formatter and add test. by @madiator in #246
  • Raise error on failed responses by @RyanMarten in #251
  • Add a SimpleLLM interface, and update documentation. by @madiator in #255
  • Cool down when hitting rate limit with online processors by @RyanMarten in #256
  • Gemini lower safety constraints by @CharlieJCJ in #259
  • Raise on None response message by @RyanMarten in #262
  • Add metadata dict + cache verification by @GeorgiosSmyrnis in #257
  • Default for all online requests to 10 minutes timeout by @RyanMarten in #265
  • Retry only on "max_length" and "content_filter" finish reason by @RyanMarten in #267
  • Retry on response format failure by @RyanMarten in #266
  • Add prism.js types to dev dependencies by @RyanMarten in #270
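
PR #267 above narrows retries to the "max_length" and "content_filter" finish reasons. A sketch of that idea under stated assumptions: the response shape, finish-reason strings, and function names here are illustrative, not curator's real internals:

```python
# Retry only when the model was cut off or filtered;
# a normal completion is accepted as-is.
RETRYABLE_FINISH_REASONS = {"max_length", "content_filter"}

def complete_with_retries(request_fn, max_attempts=3):
    """Re-issue a request while its finish reason indicates a truncated
    or filtered response; return the first clean response, or the last
    attempt if every try was retryable."""
    response = None
    for _ in range(max_attempts):
        response = request_fn()
        if response["finish_reason"] not in RETRYABLE_FINISH_REASONS:
            return response
    return response

# Hypothetical responses: first truncated, then complete.
replies = iter([
    {"finish_reason": "max_length", "text": "partial..."},
    {"finish_reason": "stop", "text": "full answer"},
])
result = complete_with_retries(lambda: next(replies))
# result["text"] == "full answer"
```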

New Contributors

Full Changelog: v0.1.11...v0.1.12

v0.1.11

06 Dec 04:47
450e934

What's Changed

Full Changelog: v0.1.10...v0.1.11

v0.1.10

26 Nov 23:39
6252238

What's Changed

Full Changelog: v0.1.9.post1...v0.1.10