[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-keras-team--keras":3,"tool-keras-team--keras":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":68,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":78,"owner_website":80,"owner_url":81,"languages":82,"stars":91,"forks":92,"last_commit_at":93,"license":94,"difficulty_score":23,"env_os":95,"env_gpu":96,"env_ram":97,"env_deps":98,"category_tags":106,"github_topics":107,"view_count":116,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":117,"updated_at":118,"faqs":119,"releases":148},3364,"keras-team\u002Fkeras","keras","Deep Learning for humans","Keras 是一个专为人类设计的深度学习框架，旨在让构建和训练神经网络变得简单直观。它解决了开发者在不同深度学习后端之间切换困难、模型开发效率低以及难以兼顾调试便捷性与运行性能的痛点。\n\n无论是刚入门的学生、专注算法的研究人员，还是需要快速落地产品的工程师，都能通过 Keras 轻松上手。它支持计算机视觉、自然语言处理、音频分析及时间序列预测等多种任务。\n\nKeras 3 的核心亮点在于其独特的“多后端”架构。用户只需编写一套代码，即可灵活选择 TensorFlow、JAX、PyTorch 或 OpenVINO 作为底层运行引擎。这一特性不仅保留了 Keras 一贯的高层易用性，还允许开发者根据需求自由选择：利用 JAX 或 PyTorch 的即时执行模式进行高效调试，或切换至速度最快的后端以获得最高 350% 的性能提升。此外，Keras 具备强大的扩展能力，能无缝从本地笔记本电脑扩展至大规模 GPU 或 TPU 集群，是连接原型开发与生产部署的理想桥梁。","# Keras 3: Deep Learning for Humans\n\nKeras 3 is a multi-backend deep learning framework, with support for JAX, TensorFlow, PyTorch, and OpenVINO (for inference-only).\nEffortlessly build and train models for computer vision, natural language processing, audio processing,\ntimeseries forecasting, 
recommender systems, etc.\n\n- **Accelerated model development**: Ship deep learning solutions faster thanks to the high-level UX of Keras\nand the availability of easy-to-debug runtimes like PyTorch or JAX eager execution.\n- **State-of-the-art performance**: By picking the backend that is the fastest for your model architecture (often JAX!),\nleverage speedups ranging from 20% to 350% compared to other frameworks. [Benchmark here](https:\u002F\u002Fkeras.io\u002Fgetting_started\u002Fbenchmarks\u002F).\n- **Datacenter-scale training**: Scale confidently from your laptop to large clusters of GPUs or TPUs.\n\nJoin nearly three million developers, from burgeoning startups to global enterprises, in harnessing the power of Keras 3.\n\n\n## Installation\n\n### Install with pip\n\nKeras 3 is available on PyPI as `keras`. Note that Keras 2 remains available as the `tf-keras` package.\n\n1. Install `keras`:\n\n```\npip install keras --upgrade\n```\n\n2. Install backend package(s).\n\nTo use `keras`, you should also install the backend of choice: `tensorflow`, `jax`, or `torch`. Additionally,\nthe `openvino` backend is available with support for model inference only.\n\n### Local installation\n\n#### Minimal installation\n\nKeras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras.\nTo install a local development version:\n\n1. Install dependencies:\n\n```\npip install -r requirements.txt\n```\n\n2. Run the installation command from the root directory.\n\n```\npython pip_build.py --install\n```\n\n3. 
Run the API generation script when creating PRs that update `keras_export` public APIs:\n\n```\n.\u002Fshell\u002Fapi_gen.sh\n```\n\n## Backend Compatibility Table\n\nThe following table lists the minimum supported versions of each backend for the latest stable release of Keras (v3.x):\n\n| Backend    | Minimum Supported Version |\n|------------|---------------------------|\n| TensorFlow | 2.16.1                    |\n| JAX        | 0.4.20                    |\n| PyTorch    | 2.1.0                     |\n| OpenVINO   | 2025.3.0                  |\n\n#### Adding GPU support\n\nThe `requirements.txt` file will install a CPU-only version of TensorFlow, JAX, and PyTorch. For GPU support, we also\nprovide a separate `requirements-{backend}-cuda.txt` for TensorFlow, JAX, and PyTorch. These install all CUDA\ndependencies via `pip` and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each\nbackend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with `conda`:\n\n```shell\nconda create -y -n keras-jax python=3.10\nconda activate keras-jax\npip install -r requirements-jax-cuda.txt\npython pip_build.py --install\n```\n\n## Configuring your backend\n\nYou can export the environment variable `KERAS_BACKEND` or you can edit your local config file at `~\u002F.keras\u002Fkeras.json`\nto configure your backend. Available backend options are: `\"tensorflow\"`, `\"jax\"`, `\"torch\"`, `\"openvino\"`. 
Example:\n\n```\nexport KERAS_BACKEND=\"jax\"\n```\n\nIn Colab, you can do:\n\n```python\nimport os\nos.environ[\"KERAS_BACKEND\"] = \"jax\"\n\nimport keras\n```\n\n**Note:** The backend must be configured before importing `keras`, and the backend cannot be changed after\nthe package has been imported.\n\n**Note:** The OpenVINO backend is an inference-only backend, meaning it is designed only for running model\npredictions using the `model.predict()` method.\n\n## Backwards compatibility\n\nKeras 3 is intended to work as a drop-in replacement for `tf.keras` (when using the TensorFlow backend). Just take your\nexisting `tf.keras` code, make sure that your calls to `model.save()` are using the up-to-date `.keras` format, and you're\ndone.\n\nIf your `tf.keras` model does not include custom components, you can start running it on top of JAX or PyTorch immediately.\n\nIf it does include custom components (e.g. custom layers or a custom `train_step()`), it is usually possible to convert it\nto a backend-agnostic implementation in just a few minutes.\n\nIn addition, Keras models can consume datasets in any format, regardless of the backend you're using:\nyou can train your models with your existing `tf.data.Dataset` pipelines or PyTorch `DataLoaders`.\n\n## Why use Keras 3?\n\n- Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework,\ne.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow.\n- Write custom components (e.g. 
layers, models, metrics) that you can use in low-level workflows in any framework.\n    - You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch.\n    - You can take a Keras model and use it as part of a PyTorch-native `Module` or as part of a JAX-native model function.\n- Make your ML code future-proof by avoiding framework lock-in.\n- As a PyTorch user: get access to power and usability of Keras, at last!\n- As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library.\n\n\nRead more in the [Keras 3 release announcement](https:\u002F\u002Fkeras.io\u002Fkeras_3\u002F).\n","# Keras 3：面向人类的深度学习\n\nKeras 3 是一个支持多后端的深度学习框架，兼容 JAX、TensorFlow、PyTorch 和 OpenVINO（仅用于推理）。  \n您可以轻松构建和训练用于计算机视觉、自然语言处理、音频处理、时间序列预测、推荐系统等领域的模型。\n\n- **加速模型开发**：借助 Keras 的高级用户体验以及 PyTorch 或 JAX 的即时执行等易于调试的运行时环境，您可以更快地交付深度学习解决方案。\n- **行业领先性能**：通过为您的模型架构选择最快的后端（通常是 JAX），相比其他框架可获得 20% 至 350% 的性能提升。[基准测试链接](https:\u002F\u002Fkeras.io\u002Fgetting_started\u002Fbenchmarks\u002F)。\n- **数据中心级训练**：无论是在笔记本电脑上还是在大型 GPU 或 TPU 集群中，您都可以自信地扩展训练规模。\n\n近三百万开发者，从初创企业到全球性企业，都在使用 Keras 3 来释放深度学习的强大潜力。\n\n\n## 安装\n\n### 使用 pip 安装\n\nKeras 3 已在 PyPI 上以 `keras` 的名称发布。请注意，Keras 2 仍可通过 `tf-keras` 包获取。\n\n1. 安装 `keras`：\n\n```\npip install keras --upgrade\n```\n\n2. 安装后端包。\n\n要使用 `keras`，您还需要安装所选的后端：`tensorflow`、`jax` 或 `torch`。此外，`openvino` 后端仅支持模型推理。\n\n### 本地安装\n\n#### 最小化安装\n\nKeras 3 兼容 Linux 和 macOS 系统。对于 Windows 用户，我们建议使用 WSL2 来运行 Keras。要安装本地开发版本：\n\n1. 安装依赖项：\n\n```\npip install -r requirements.txt\n```\n\n2. 在根目录下运行安装命令：\n\n```\npython pip_build.py --install\n```\n\n3. 
当您提交更新 `keras_export` 公开 API 的 PR 时，请运行 API 生成脚本：\n\n```\n.\u002Fshell\u002Fapi_gen.sh\n```\n\n## 后端兼容性表\n\n下表列出了 Keras 最新稳定版本 (v3.x) 对各后端的最低支持版本：\n\n| 后端        | 最低支持版本 |\n|-------------|--------------|\n| TensorFlow  | 2.16.1       |\n| JAX         | 0.4.20       |\n| PyTorch     | 2.1.0        |\n| OpenVINO    | 2025.3.0     |\n\n#### 添加 GPU 支持\n\n`requirements.txt` 文件将安装仅支持 CPU 的 TensorFlow、JAX 和 PyTorch 版本。若需 GPU 支持，我们还提供了针对 TensorFlow、JAX 和 PyTorch 的单独 `requirements-{backend}-cuda.txt` 文件。这些文件会通过 `pip` 安装所有 CUDA 依赖项，并假定已预先安装 NVIDIA 驱动程序。我们建议为每个后端创建独立的 Python 环境，以避免 CUDA 版本不匹配。以下是以 `conda` 创建 JAX GPU 环境的示例：\n\n```shell\nconda create -y -n keras-jax python=3.10\nconda activate keras-jax\npip install -r requirements-jax-cuda.txt\npython pip_build.py --install\n```\n\n## 配置后端\n\n您可以通过导出环境变量 `KERAS_BACKEND`，或编辑本地配置文件 `~\u002F.keras\u002Fkeras.json` 来配置后端。可用的后端选项有：“tensorflow”、“jax”、“torch”、“openvino”。示例如下：\n\n```\nexport KERAS_BACKEND=\"jax\"\n```\n\n在 Colab 中，您可以这样做：\n\n```python\nimport os\nos.environ[\"KERAS_BACKEND\"] = \"jax\"\n\nimport keras\n```\n\n**注意**：必须在导入 `keras` 之前配置后端，且一旦导入该包后，便无法再更改后端。\n\n**注意**：OpenVINO 后端仅为推理后端，即它仅用于通过 `model.predict()` 方法进行模型预测。\n\n## 向后兼容性\n\nKeras 3 旨在作为 `tf.keras` 的直接替代品（当使用 TensorFlow 后端时）。只需将您现有的 `tf.keras` 代码迁移过来，确保调用 `model.save()` 时使用最新的 `.keras` 格式，即可完成迁移。\n\n如果您的 `tf.keras` 模型不包含自定义组件，您可以立即在 JAX 或 PyTorch 后端上运行它。\n\n如果模型包含自定义组件（例如自定义层或自定义 `train_step()`），通常只需几分钟即可将其转换为与后端无关的实现。\n\n此外，Keras 模型可以消费任何格式的数据集，无论您使用何种后端：您可以继续使用现有的 `tf.data.Dataset` 数据管道或 PyTorch 的 `DataLoader` 来训练模型。\n\n## 为什么使用 Keras 3？\n\n- 您可以在任何框架之上运行高级别的 Keras 工作流，从而灵活利用各框架的优势，例如 JAX 的可扩展性和高性能，或 TensorFlow 的生产级生态系统选项。\n- 您可以编写自定义组件（如层、模型、指标），并在任何框架的底层工作流中使用它们。\n  - 您可以将 Keras 模型放入用原生 TF、JAX 或 PyTorch 手写训练循环中进行训练。\n  - 您也可以将 Keras 模型作为 PyTorch 原生 `Module` 的一部分，或作为 JAX 原生模型函数的一部分来使用。\n- 通过避免框架锁定，使您的机器学习代码更具未来适应性。\n- 对于 PyTorch 用户：终于可以享受到 Keras 的强大功能和易用性！\n- 对于 JAX 用户：终于可以使用功能齐全、久经考验且文档完善的建模与训练库。\n\n\n更多信息请参阅 [Keras 3 
发布公告](https:\u002F\u002Fkeras.io\u002Fkeras_3\u002F)。","# Keras 3 快速上手指南\n\nKeras 3 是一个支持多后端（JAX、TensorFlow、PyTorch、OpenVINO）的深度学习框架，旨在让开发者能够轻松构建和训练各类模型，同时享受不同后端带来的性能优势。\n\n## 环境准备\n\n*   **操作系统**：兼容 Linux 和 macOS。Windows 用户推荐使用 WSL2。\n*   **Python 版本**：建议 Python 3.9+（示例中使用 3.10）。\n*   **前置依赖**：\n    *   需预先安装所选后端的底层框架：`tensorflow`、`jax` 或 `torch`。\n    *   **GPU 支持**：若需使用 GPU，请确保已安装对应的 NVIDIA 驱动，并建议使用独立的虚拟环境（如 conda）以避免 CUDA 版本冲突。\n*   **国内加速**：推荐使用国内镜像源加速安装（如清华源、阿里源）。\n\n## 安装步骤\n\n### 1. 基础安装（CPU 版本）\n\n使用 pip 安装 Keras 3 及你选择的后端框架。以下以安装 `jax` 后端为例（也可替换为 `tensorflow` 或 `torch`）：\n\n```bash\n# 推荐使用国内镜像源加速\npip install keras -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple --upgrade\npip install jax -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：Keras 2 用户请注意，旧版本现在以 `tf-keras` 包名提供，新版包名为 `keras`。\n\n### 2. GPU 版本安装（可选）\n\n若需启用 GPU 加速，建议创建干净的 conda 环境并安装包含 CUDA 依赖的要求文件。以下以 JAX GPU 为例：\n\n```shell\nconda create -y -n keras-jax python=3.10\nconda activate keras-jax\n\n# 安装包含 CUDA 依赖的 requirements 文件（需自行获取或参考官方仓库）\npip install -r requirements-jax-cuda.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 从源码安装本地开发版本（如果是从 GitHub clone 的项目）\npython pip_build.py --install\n```\n\n## 基本使用\n\n### 配置后端\n\n在导入 `keras` 之前，必须通过环境变量指定后端。一旦导入，后端将无法更改。\n\n**方法一：命令行设置（推荐）**\n\n```bash\nexport KERAS_BACKEND=\"jax\"\n```\n\n**方法二：代码中设置（适用于 Colab 或脚本）**\n\n```python\nimport os\nos.environ[\"KERAS_BACKEND\"] = \"jax\"  # 可选：\"tensorflow\", \"torch\", \"openvino\"\n\nimport keras\n```\n\n### 快速示例：构建并训练一个简单的模型\n\n以下示例展示了一个通用的工作流程，无论后端如何，代码结构保持一致：\n\n```python\nimport os\nos.environ[\"KERAS_BACKEND\"] = \"jax\" # 确保在导入 keras 前设置\n\nimport keras\nfrom keras import layers, models\n\n# 1. 定义模型\nmodel = models.Sequential([\n    layers.Input(shape=(784,)),\n    layers.Dense(64, activation=\"relu\"),\n    layers.Dropout(0.5),\n    layers.Dense(10, activation=\"softmax\")\n])\n\n# 2. 
编译模型\nmodel.compile(\n    optimizer=\"adam\",\n    loss=\"sparse_categorical_crossentropy\",\n    metrics=[\"accuracy\"]\n)\n\n# 3. 准备数据 (此处使用随机数据演示，实际使用中可替换为 tf.data 或 PyTorch DataLoader)\nimport numpy as np\nx_train = np.random.random((1000, 784))\ny_train = np.random.randint(0, 10, (1000,))\nx_test = np.random.random((200, 784))\ny_test = np.random.randint(0, 10, (200,))\n\n# 4. 训练模型\nmodel.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)\n\n# 5. 评估模型\nscore = model.evaluate(x_test, y_test, verbose=0)\nprint(f\"Test loss: {score[0]}, Test accuracy: {score[1]}\")\n```\n\n### 迁移提示\n如果你已有 `tf.keras` 代码，Keras 3 通常可以直接作为替代品运行。只需确保 `model.save()` 使用新的 `.keras` 格式即可。若包含自定义组件，稍作调整即可实现跨后端运行。","某初创医疗影像团队需要在有限算力下快速迭代肺部结节检测模型，并尝试不同后端以优化推理速度。\n\n### 没有 keras 时\n- 团队若需从 TensorFlow 切换至更快的 JAX 后端，必须重写大量底层代码，迁移成本极高且容易引入 Bug。\n- 调试复杂的自定义网络层时，缺乏类似 PyTorch 的动态执行机制，定位错误耗时费力，严重拖慢研发节奏。\n- 面对不同的部署环境（如云端 TPU 或本地 CPU），需要维护多套训练脚本，难以实现从笔记本到数据中心的无缝扩展。\n- 模型性能调优受限，无法灵活选择针对特定架构最快的后端，导致推理延迟居高不下，影响临床辅助诊断效率。\n\n### 使用 keras 后\n- 借助 Keras 的多后端支持，团队仅需配置环境变量即可在 TensorFlow、JAX 和 PyTorch 间自由切换，无需修改任何模型代码。\n- 利用 JAX 或 PyTorch 的即时执行模式进行开发，研究人员能实时查看中间变量，将模型调试时间缩短了 50% 以上。\n- 同一套代码可直接从开发人员的笔记本电脑平滑扩展至集群级 GPU\u002FTPU 训练，大幅降低了运维复杂度。\n- 通过基准测试发现 JAX 后端对该卷积架构加速效果显著，推理速度提升超过 200%，成功满足了实时诊断的低延迟要求。\n\nKeras 让团队摆脱了框架绑定的束缚，以最低的成本实现了开发效率与模型性能的双重飞跃。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fkeras-team_keras_6e1be523.png","keras-team","Keras","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fkeras-team_dd76ba2a.jpg",null,"keras-users@googlegroups.com","https:\u002F\u002Fkeras.io\u002F","https:\u002F\u002Fgithub.com\u002Fkeras-team",[83,87],{"name":84,"color":85,"percentage":86},"Python","#3572A5",100,{"name":88,"color":89,"percentage":90},"Shell","#89e051",0,63927,19744,"2026-04-04T15:24:37","Apache-2.0","Linux, macOS, Windows (需通过 WSL2)","非必需（支持 CPU）。若需 GPU 加速，需要 NVIDIA GPU 并预装 NVIDIA 驱动，具体显存大小和 CUDA 
版本取决于所选后端（TensorFlow\u002FJAX\u002FPyTorch）及模型规模，文中未指定统一最低要求。","未说明",{"notes":99,"python":100,"dependencies":101},"Keras 3 是多后端框架，使用前必须安装并配置其中一个后端（TensorFlow、JAX、PyTorch 或仅推理的 OpenVINO）。需在导入 keras 包之前通过环境变量 KERAS_BACKEND 或配置文件指定后端，导入后不可更改。建议使用独立的 Python 环境（如 conda）以避免不同后端的 CUDA 依赖冲突。Windows 用户推荐使用 WSL2。","3.10 (示例中提及，具体最低版本未在文本中明确限制)",[102,103,104,105],"tensorflow>=2.16.1","jax>=0.4.20","torch>=2.1.0","openvino>=2025.3.0",[13,51,54],[108,109,110,111,112,113,114,115],"deep-learning","tensorflow","neural-networks","machine-learning","data-science","python","jax","pytorch",10,"2026-03-27T02:49:30.150509","2026-04-06T02:33:20.498876",[120,125,130,135,140,144],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},15701,"为什么保存并重新加载 Keras 模型后，预测结果变得随机或像未训练的模型？","这通常由随机种子未固定或数据预处理不一致导致。解决方案包括：\n1. 固定所有随机源：设置 numpy、tensorflow 和 keras 的随机种子（例如 np.random.seed(10), tf.random.set_seed(10), keras.utils.set_random_seed(10)）。\n2. 确保加载模型进行评估时，数据预处理步骤（如图像归一化除以 255.0）与训练时完全一致。\n3. 尝试使用 SaveModel 格式而非默认的 .h5 格式保存模型，因为某些版本中权重可能在 .h5 保存时被重新初始化。\n4. 检查导入语句，统一使用 `import tensorflow.keras as k` 或 `from keras...`，避免混合导入导致状态不一致。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fissues\u002F4875",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},15702,"Keras 是否支持多 GPU 训练？如何实现？","Keras 支持多 GPU 训练。常见做法是使用 `multi_gpu_model` 包装器或将模型并行化。\n关键注意事项：\n1. 训练时应使用并行模型（parallel model）调用 fit()，但保存权重或进行推理时应使用串行模型（serial model）。\n2. 避免直接使用 `save_weights_only=False` 保存整个模型，这可能导致权重顺序混乱。建议仅保存权重，或使用自定义回调每 N 个 epoch 保存模板模型。\n3. 如果遇到问题，可以尝试直接使用底层的 multi-gpu 实现代码（如 keras_experiments 中的实现），训练后将权重加载到单 GPU 模型中进行推理。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fissues\u002F2436",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},15703,"如何在 Keras 中实现“实时”循环神经网络，即逐个时间步输入数据并保留内部状态？","要实现逐个时间步预测并保留 RNN\u002FLSTM 的内部状态：\n1. 在较新版本（>1.0）中，调用 `model.predict()` 时必须显式指定 `batch_size` 参数，否则可能默认行为导致错误。\n2. 确保输入数据的形状正确，通常需要保持 batch 维度。\n3. 
注意区分 batch size（批次大小）和 time steps（时间步数）的概念，避免逻辑混淆。虽然早期版本可能允许直接预测，但新版本需严格遵循批次处理规则。若需状态保持，可能需要使用 stateful=True 的 RNN 层并手动管理状态重置。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fissues\u002F98",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},15704,"在处理不同长度的序列进行 Sequence-to-Sequence 学习时，如何处理填充（padding）以避免偏差损失函数？","对于不同长度的序列：\n1. 可以对输入和目标序列进行零填充（padding）。\n2. 关于损失函数处理填充值的问题，可以尝试不指定固定的 `input_length`，而是使用 `input_shape=(None, input_dim)` 让模型适应可变长度。\n3. 现代实践更推荐使用 Encoder-Decoder 架构配合注意力机制（Attention）来处理此类问题，这通常比简单的填充加掩码更有效且能更好地处理变长序列。","https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fissues\u002F395",{"id":141,"question_zh":142,"answer_zh":143,"source_url":129},15705,"在使用 ModelCheckpoint 回调保存多 GPU 模型时遇到权重保存错误或无法恢复训练的问题，如何解决？","在多 GPU 环境下使用 ModelCheckpoint 时：\n1. 避免设置 `save_weights_only=False` 来保存整个模型结构，这常导致权重顺序错乱。应仅保存权重（save_weights_only=True）。\n2. 优化器状态可能无法通过标准回调正确保存，导致无法无缝恢复训练。变通方法是每隔 N 个 epoch 保存一次完整的模型模板，而不是依赖断点续训。\n3. 如果必须修改源码，需注意在新版 Keras 中将 `save_weights_to_hdf5_group` 替换为 `saving.save_weights_to_hdf5_group(f, layers)`。\n4. 最稳妥的方式是训练时使用并行模型，但在保存和推理时切换回串行模型并加载权重。",{"id":145,"question_zh":146,"answer_zh":147,"source_url":124},15706,"为什么同样的模型和输入，每次运行加载后的预测结果都不相同（即使结果是合理的）？","这种非确定性通常源于导入方式或随机性控制不当：\n1. 检查导入语句：混用 `import tensorflow.keras` 和 `from keras` 可能导致后端行为不一致。建议统一使用一种导入方式（如 `import tensorflow.keras as k`）。\n2. 即使模型表现合理，代数计算也应是完全确定的。请确保在脚本开头固定所有随机种子（numpy, tensorflow, keras）。\n3. 某些情况下，列表推导式中加载模型（如 `[load_model(...) 
for ...]`）在不同导入上下文下表现不同，尝试显式循环加载或统一导入路径可解决此问题。",[149,154,159,164,169,174,179,184,189,194,199,204,209,214,219,224,229,234,239,244],{"id":150,"version":151,"summary_zh":152,"released_at":153},90396,"v3.14.0","## 亮点\n\n- **Orbax 检查点集成**：全面支持 Orbax 检查点，包括分片、远程路径和步骤恢复功能。\n- **量化升级**：新增对激活感知权重量化（AWQ）和非对称 INT4 子通道量化的支持。\n- **BatchNorm 中的批量归一化**：在 `BatchRenormalization` 层中添加了批量归一化功能。\n- **新优化器**：新增 `ScheduleFreeAdamW` 优化器。\n- **门控注意力机制**：在 `MultiHeadAttention` 和 `GroupedQueryAttention` 层中引入了可选的门控注意力机制支持。\n\n---\n\n## 新特性与操作\n\n### 多后端操作\n- **NaN 感知 NumPy 操作**：在 `keras.ops.numpy` 中新增了 `nanmin`、`nanmax`、`nanmean`、`nanmedian`、`nanvar`、`nanstd`、`nanprod`、`nanargmin`、`nanargmax` 和 `nanquantile` 的支持。\n- **新的数学与线性代数算子**：新增了 `nextafter`、`ptp`、`view`、`sinc`、`fmod`、`i0`、`fliplr`、`flipud`、`rad2deg`、`geomspace`、`depth_to_space`、`space_to_depth` 和 `fold`。\n\n### 预处理与层\n- **CLAHE 层**：新增对比度受限自适应直方图均衡化预处理层。\n- **迭代对象适配支持**：预处理层现在支持在 `adapt()` 方法中使用 Python 迭代对象，从而可以直接使用 Grain 数据集。\n\n---\n\n## OpenVINO 后端支持\n\nOpenVINO 后端获得了重大更新，实现了广泛的 NumPy 和神经网络操作，以达到与其他后端的功能对等：\n\n- **NumPy 操作**：`vander`、`trapezoid`、`corrcoef`、`correlate`、`flip`、`diagonal`、`cbrt`、`hypot`、`trace`、`kron`、`argpartition`、`logaddexp2`、`ldexp`、`select`、`round`、`vstack`、`hsplit`、`vsplit`、`tile`、`nansum`、`tensordot`、`exp2`、`trunc`、`gcd`、`unravel_index`、`inner`、`cumprod`、`searchsorted`、`hanning`、`diagflat`、`norm`、`histogram`、`lcm`、`allclose`、`real`、`imag`、`isreal`、`kaiser`、`shuffle`、`einsum`、`quantile`、`conj`、`randint`、`in_top_k`、`signbit`、`gamma`、`heaviside`、`var`、`std`、`inv`、`solve`、`cholesky_inverse`、`fft`、`fft2`、`ifft2`、`rfft`、`irfft`、`stft`、`istft`、`scatter`、`binomial`、`unfold`、`QR 分解`、`view` 等。\n- **神经网络操作**：新增了对 `separable_conv`、`conv_transpose`、`adaptive_average_pool`、`adaptive_max_pool`、`RNN`、`LSTM` 和 `GRU` 的支持。\n- **控制流操作**：实现了 `cond`、`scan`、`associative_scan`、`map`、`switch`、`fori_loop` 和 `vectorized_map`。\n\n---\n\n## 错误修复与改进\n\n### 后端特定改进\n- **PyTorch**：导出中的动态形状支持、设备选择改进，以及基于 CuDNN 的 LSTM 和 GRU 实现的错误修复。\n- 
**JAX**：改进了 `FlaxLayer` 和 `JaxLayer` 中的随机数生成器处理、变量 jitting 改进，以及直接从 JAX 导出到 ONNX 的功能。\n- **NumPy**：为 NumPy 后端启用了掩码支持。\n\n### 其他改进\n- 修复了多个跨 l 的符号形状错误。","2026-04-03T01:45:49",{"id":155,"version":156,"summary_zh":157,"released_at":158},90397,"v3.12.1","## 安全修复与加固\n\n此版本为模型加载和保存引入了关键的安全加固措施，并改进了 JAX 后端的元数据处理。\n\n*   **在 `safe_mode` 模式下禁止 `TFSMLayer` 反序列化 ([#22035](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22035))**\n    *   之前，在反序列化过程中，`TFSMLayer` 可以加载外部 TensorFlow SavedModel，而不受 Keras 的 `safe_mode` 限制。这可能导致在模型调用时执行攻击者控制的计算图。\n    *   现在，`TFSMLayer` 默认强制启用 `safe_mode`。通过 `from_config()` 进行反序列化时，除非显式传入 `safe_mode=False` 或调用 `keras.config.enable_unsafe_deserialization()`，否则会抛出 `ValueError` 异常。\n\n*   **修复 `KerasFileEditor` 中的拒绝服务 (DoS) 漏洞 ([#21880](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21880))**\n    *   引入对 HDF5 数据集元数据的验证，以防止“形状炸弹”攻击。\n    *   加固 `.keras` 文件编辑器，使其能够抵御恶意元数据引发的维度溢出或无界内存分配问题（例如，分配多 GB 大小的 NumPy 张量）。\n\n*   **阻止 HDF5 文件中的外部链接 ([#22057](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22057))**\n    *   Keras 现在明确禁止在加载 HDF5 文件时使用外部链接。此举可防范潜在的安全风险，即权重文件可能指向外部系统数据集。\n    *   同时，增强了对 H5 组和数据集的验证，确保其为本地且有效。\n\n## 保存与序列化\n\n*   **提升 H5IOStore 的完整性 ([#22057](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22057))**\n    *   重构了 `H5IOStore` 和 `ShardedH5IOStore`，移除了未使用且未经验证的方法。\n    *   修复了分片 HDF5 存储中的键顺序逻辑，以确保在不同环境中加载状态的一致性。\n\n---\n\n### 致谢\n特别感谢报告这些漏洞并协助实施修复的安全研究人员和贡献者：@0xManan、@HyperPS 和 @hertschuh。\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.12.0...v3.12.1","2026-01-30T18:36:05",{"id":160,"version":161,"summary_zh":162,"released_at":163},90398,"v3.13.2","## 安全修复与加固\n\n此版本为模型加载和保存引入了关键的安全加固措施，并改进了 JAX 后端的元数据处理。\n\n*   **在 `safe_mode` 模式下禁止 `TFSMLayer` 反序列化 ([#22035](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22035))**\n    *   之前，在反序列化过程中，`TFSMLayer` 可以在不遵守 Keras `safe_mode` 
设置的情况下加载外部 TensorFlow SavedModel。这可能导致在模型调用时执行攻击者控制的计算图。\n    *   现在，`TFSMLayer` 默认强制启用 `safe_mode`。通过 `from_config()` 进行反序列化时，除非显式传递 `safe_mode=False` 或调用 `keras.config.enable_unsafe_deserialization()`，否则将抛出 `ValueError` 异常。\n\n*   **修复 `KerasFileEditor` 中的拒绝服务 (DoS) 漏洞 ([#21880](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21880))**\n    *   引入对 HDF5 数据集元数据的验证，以防止“形状炸弹”攻击。\n    *   加固 `.keras` 文件编辑器，使其能够抵御恶意元数据引发的维度溢出或无界内存分配问题（例如，分配多 GB 大小的 NumPy 张量）。\n\n*   **阻止 HDF5 文件中的外部链接 ([#22057](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22057))**\n    *   Keras 现在明确禁止在加载 HDF5 文件时使用外部链接。此举可防范潜在的安全风险，即权重文件可能指向外部系统数据集。\n    *   同时，增强了对 H5 组和数据集的验证，确保其为本地且有效。\n\n## 后端特定改进（JAX）\n\n*   **在 `nnx_metadata` 中默认设置 `mutable=True` ([#22074](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22074))**\n    *   更新了 JAX 后端逻辑，确保在 `nnx_metadata` 中变量默认被视为可变。\n    *   这使得 Keras 3.13.2 在启用 Keras NNX 集成时与 Flax 0.12.3 兼容。\n\n## 保存与序列化\n\n*   **提升 H5IOStore 的完整性 ([#22057](https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F22057))**\n    *   重构了 `H5IOStore` 和 `ShardedH5IOStore`，移除了未使用且未经验证的方法。\n    *   修复了分片 HDF5 存储中的键顺序逻辑，以确保在不同环境中加载状态的一致性。\n\n---\n\n### 贡献者\n我们感谢以下贡献者提供的安全报告和代码改进：\n@0xManan、@HyperPS、@hertschuh 和 @divyashreepathihalli。\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.13.1...v3.13.2","2026-01-30T01:03:36",{"id":165,"version":166,"summary_zh":167,"released_at":168},90399,"v3.13.1","### 错误修复与改进\n\n* **常规**\n    * 移除了在使用 NumPy 2.0 或更高版本时，执行 `import keras` 时触发的持续警告。（#21949）\n* **后端**\n    * **JAX：** 修复了在使用 JAX 0.6.2 以上版本时，CUDNN Flash Attention 功能失效的问题。（#21970）\n* **导出与序列化**\n    * 
解决了导出流程中的回归问题，该问题曾错误地将批大小强制设置为动态。现在，导出过程能够在定义了静态批大小时正确尊重这一设置。（#21944）\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.13.0...v3.13.1","2026-01-14T19:04:41",{"id":170,"version":171,"summary_zh":172,"released_at":173},90400,"v3.13.0","## 重大变更\n\n自 3.13.0 版本起，Keras 现在要求使用 `Python 3.11` 或更高版本。请确保您的环境已升级到 Python 3.11+，以便安装最新版本。\n\n## 亮点\n\n### LiteRT 导出\n\n现在您可以将 Keras 模型直接导出为 LiteRT 格式（原 TensorFlow Lite），用于设备端推理。此次更新带来了输入签名处理和导出工具文档的改进。这些更改确保 LiteRT 导出仅在安装了 TensorFlow 的情况下可用，同时更新了导出 API 和文档，并增强了对各类模型类型的输入签名推断能力。\n\n示例：\n\n```python\nimport keras\nimport numpy as np\n\n# 1. 定义一个简单模型\nmodel = keras.Sequential([\n    keras.layers.Input(shape=(10,)),\n    keras.layers.Dense(10, activation=\"relu\"),\n    keras.layers.Dense(1, activation=\"sigmoid\")\n])\n\n# 2. 编译并训练（可选，但建议在导出前进行）\nmodel.compile(optimizer=\"adam\", loss=\"binary_crossentropy\")\nmodel.fit(np.random.rand(100, 10), np.random.randint(0, 2, 100), epochs=1)\n\n# 3. 将模型导出为 LiteRT 格式\nmodel.export(\"my_model.tflite\", format=\"litert\")\n\nprint(\"模型已成功导出为 'my_model.tflite'，格式为 LiteRT。\")\n``` \n\n### GPTQ 量化\n\n* 引入了 `keras.quantizers.QuantizationConfig` API，允许用户自定义权重和激活量化的配置，从而提供更大的灵活性来定义量化方案。\n    \n* 在 `Model.quantize` 方法中新增了 `filters` 参数，支持通过正则表达式字符串、正则表达式字符串列表或可调用函数来指定需要量化的层。这为量化过程提供了更精细的控制。\n* 对 GPTQ 量化流程进行了重构，移除了基于启发式的模型结构检测机制。取而代之的是，现在可以通过 `GPTQConfig` 显式指定模型的量化结构，或者通过重写新的 `Model.get_quantization_layer_structure` 方法来实现，从而提升对不同模型架构的灵活性和鲁棒性。\n* 核心层如 `Dense`、`EinsumDense`、`Embedding` 和 `ReversibleEmbedding` 已更新，以接受并使用新的 `QuantizationConfig` 对象，从而实现对其量化行为的精细控制。\n* 在 Model 类中新增了 `get_quantization_layer_structure` 方法，供模型作者定义 GPTQ 等结构感知量化模式所需的拓扑结构。\n* 引入了一个新的实用函数 `should_quantize_layer`，用于集中管理根据提供的过滤器判断某一层是否应被量化的逻辑。\n* 实现了 `QuantizationConfig` 对象在 Keras 层中的序列化和反序列化功能，使得量化后的模型可以正确地保存和加载。\n* 修改了 `AbsMaxQuantizer`，使其能够在 `__call__` 方法中动态指定量化轴，而不再严格地在初始化时固定量化轴。\n\n示例","2025-12-18T00:06:45",{"id":175,"version":176,"summary_zh":177,"released_at":178},90401,"v3.12.0","## 
亮点\n\n### Keras 新增模型蒸馏 API！\n\n现在，您可以通过一个易于使用的 API 将大型模型蒸馏为小型模型，同时最大限度地减少在参考数据集上的性能下降——该 API 与所有现有 Keras 模型兼容。您可以指定多种不同的蒸馏损失函数，也可以自定义损失函数。该 API 支持同时使用多个蒸馏损失。\n\n示例：\n\n```python\n# 加载待蒸馏的教师模型\nteacher = ...\n# 这是我们希望蒸馏到的目标学生模型\nstudent = ...\n\n# 配置蒸馏流程\ndistiller = Distiller(\n    teacher=teacher,\n    student=student,\n    distillation_losses=LogitsDistillation(temperature=3.0),\n)\ndistiller.compile(\n    optimizer='adam',\n    loss='sparse_categorical_crossentropy',\n    metrics=['accuracy']\n)\n\n# 训练蒸馏后的模型\ndistiller.fit(x_train, y_train, epochs=10)\n``` \n\n### Keras 支持 GPTQ 量化！\n\nGPTQ 现已内置到 Keras API 中。GPTQ 是一种训练后仅对权重进行量化的压缩方法，它会逐层将模型压缩至 int4 格式。对于每一层，GPTQ 使用二阶方法更新权重，同时在校准数据集上最小化误差。\n\n请参阅[本指南](https:\u002F\u002Fkeras.io\u002Fguides\u002Fgptq_quantization_in_keras\u002F)了解如何使用。\n\n示例：\n\n```python\nmodel = keras_hub.models.Gemma3CausalLM.from_preset(\"gemma3_1b\")\ngptq_config = keras.quantizers.GPTQConfig(\n    dataset=calibration_dataset,\n    tokenizer=model.preprocessor.tokenizer,\n    weight_bits=4,\n    group_size=128,\n    num_samples=256,\n    sequence_length=256,\n    hessian_damping=0.01,\n    symmetric=False,\n    activation_order=False,\n)\nmodel.quantize(\"gptq\", config=gptq_config)\noutputs = model.generate(prompt, max_length=30)\n``` \n\n### 更好地支持 [Grain](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fgrain) 数据集！\n\n- 为 `keras.utils.image_dataset_from_directory` 和 `keras.utils.text_dataset_from_directory` 添加 Grain 支持。只需指定 `format=\"grain\"`，即可返回 Grain 数据集，而非 TF 数据集。\n- 使几乎所有 Keras 预处理层都与 Grain 数据集兼容。\n\n## 新特性\n\n- 新增 `keras.layers.ReversibleEmbedding` 层：这是一种既能嵌入又能反向投影回输入空间的嵌入层。可在 `call()` 方法中使用 `reverse` 参数。\n- 在 `model.export()` 中新增 `opset_version` 参数。该参数专用于 `format=\"onnx\"`，用于指定 ONNX 的算子集版本。\n- 新增 `keras.ops.isin` 操作。\n- 新增 `keras.ops.isneginf` 和 `keras.ops.isposinf` 操作。\n- 新增 `keras.ops.isreal` 操作。\n- 新增 `keras.ops.cholesky_inverse` 操作，并在 `keras.ops.cholesky` 中新增 `upper` 参数。\n- 新增 `keras.ops.image.scale_and_translate` 操作。\n- 新增 
`keras.ops.hypot` 操作。\n- 新增 `keras.ops.gcd` 操作。\n- 新增 `keras.ops.kron` 操作。\n- 新增 `keras.ops.logaddexp2` 操作。\n- 新增 `keras.ops.view` 操作。\n- 新增 `keras.ops.unfold` 操作。\n- 新增 `keras.ops.jvp` 操作。\n- 新增 `keras.ops.trapezoid` 操作。\n- OpenVINO 后端新增对 20 多种新操作的支持。\n","2025-10-27T20:22:24",{"id":180,"version":181,"summary_zh":182,"released_at":183},90402,"v3.11.3","## 变更内容\n* 版本升级至 3.11.3，由 @rtg0795 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21607 中完成。\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.11.2...v3.11.3","2025-08-22T17:48:22",{"id":185,"version":186,"summary_zh":187,"released_at":188},90403,"v3.11.2","## 变更内容\n* 版本升级至 3.11.2，并由 @laxmareddyp 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21570 中修复了 nnx 相关问题 #21565\n\n## 新贡献者\n* @laxmareddyp 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21570 中完成了首次贡献\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.11.1...v3.11.2","2025-08-11T21:12:49",{"id":190,"version":191,"summary_zh":192,"released_at":193},90404,"v3.11.1","## 变更内容\n* 版本升级至 3.11.1，由 @rtg0795 在 https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fpull\u002F21535 中完成\n\n\n**完整变更日志**: https:\u002F\u002Fgithub.com\u002Fkeras-team\u002Fkeras\u002Fcompare\u002Fv3.11.0...v3.11.1","2025-07-31T22:02:03",{"id":195,"version":196,"summary_zh":197,"released_at":198},90405,"v3.11.0","## 变更内容\n\n- 添加了 int4 量化支持。\n- 在 `fit()`\u002F`evaluate()`\u002F`predict()` 中支持 [Grain](https:\u002F\u002Fgithub.com\u002Fgoogle\u002Fgrain) 数据加载器。\n- 添加了 `keras.ops.kaiser` 函数。\n- 添加了 `keras.ops.hanning` 函数。\n- 添加了 `keras.ops.cbrt` 函数。\n- 添加了 `keras.ops.deg2rad` 函数。\n- 添加了 `keras.ops.layer_normalization` 函数，以利用后端特定的性能优化。\n- 各种错误修复和性能优化。\n\n## 后端特定变更\n\n### JAX 后端\n\n- 支持 NNX 库。现在可以将 Keras 层和模型用作 NNX 模块。\n- 支持切片操作中使用形状 `-1`。\n\n\n### TensorFlow 后端\n\n- 在 `Flatten` 层中添加了对多个动态维度的支持。\n\n\n### OpenVINO 后端\n\n- 添加了对 30 
## New Contributors

* @adar21a made their first contribution in https://github.com/keras-team/keras/pull/21275
* @Phil2852 made their first contribution in https://github.com/keras-team/keras/pull/21304
* @dayo09 made their first contribution in https://github.com/keras-team/keras/pull/21340
* @iazzi made their first contribution in https://github.com/keras-team/keras/pull/21370
* @timovdk made their first contribution in https://github.com/keras-team/keras/pull/21385
* @mohiuddin-khan-shiam made their first contribution in https://github.com/keras-team/keras/pull/21392
* @p-wysocki made their first contribution in https://github.com/keras-team/keras/pull/21317
* @Gayathri-K-Binoy made their first contribution in https://github.com/keras-team/keras/pull/21493

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.10.0...v3.11.0

# v3.10.0 (2025-05-19)

## New features

- Add support for weight sharding when saving very large models with `model.save()`, controlled via the `max_shard_size` argument. Specifying this argument splits your Keras model weight file into chunks of at most this size; use `load_model()` to reload the sharded files.
- Add the `keras.optimizers.Muon` optimizer.
- Add the image preprocessing layer `keras.layers.RandomElasticTransform`.
- Add the loss function `keras.losses.CategoricalGeneralizedCrossEntropy` (with functional version `keras.losses.categorical_generalized_cross_entropy`).
- Add an `axis` argument to `SparseCategoricalCrossentropy`.
- Add `lora_alpha` to all LoRA-enabled layers. If set, this parameter scales the low-rank adaptation delta during the forward pass.
- Add the activation function `keras.activations.sparse_sigmoid`.
- Add op `keras.ops.image.elastic_transform`.
- Add op `keras.ops.angle`.
- Add op `keras.ops.bartlett`.
- Add op `keras.ops.blackman`.
- Add op `keras.ops.hamming`.
- Add ops `keras.ops.view_as_complex` and `keras.ops.view_as_real`.

### PyTorch backend

- Add cuDNN support for LSTM with the PyTorch backend.

### TensorFlow backend

- Add `tf.RaggedTensor` support to the `Embedding` layer.
- Add variable-level support for the `synchronization` argument.

### OpenVINO backend

- Add support for over 50 additional Keras ops in the OpenVINO inference backend!

## New Contributors

* @JyotinderSingh made their first contribution in https://github.com/keras-team/keras/pull/20993
* @SaifMohammed22 made their first contribution in https://github.com/keras-team/keras/pull/20982
* @11happy made their first contribution in https://github.com/keras-team/keras/pull/20940
* @jpy794 made their first contribution in https://github.com/keras-team/keras/pull/21008
* @chiruu12 made their first contribution in https://github.com/keras-team/keras/pull/20950
* @arkhamHack made their first contribution in https://github.com/keras-team/keras/pull/21010
* @samitshah1 made their first contribution in https://github.com/keras-team/keras/pull/21036
* @nathanrooy made their first contribution in https://github.com/keras-team/keras/pull/21056
* @rfezzani made their first contribution in https://github.com/keras-team/keras/pull/21053
* @drasmuss made their first contribution in https://github.com/keras-team/keras/pull/21072
* @pass-lin made their first contribution in https://github.com/keras-team/keras/pull/21037
* @wilsbj made their first contribution in https://github.com/keras-team/keras/pull/21077
* @timsweeneyfanelli made their first contribution in https://github.com/keras-team/keras/pull/21081
* @darshil929 made their first contribution in https://github.com/keras-team/keras/pull/21042
* @superbobry made their first contribution in https://github.com/keras-team/keras/pull/21106
* @nithin9000 made their first contribution in https://github.com/keras-team/keras/pull/21136
* @Huanli-Gong made their first contribution in https://github.com/keras-team/keras/pull/21141
* @he7d3r made their first contribution in https://github.com/keras-team/keras/pull/21098
* @Kayyuri made their first contribution in https://github.com/keras-team/keras/pull/21125
* @b05505027 made their first contribution in https://github.com/keras-team/keras/pull/21139
* @Hmm-1224 made their first contribution in https://github.com/keras-team/keras/pull/21060
* @hridaya14 made their first contribution in https://github.com/keras-team/keras/pull/21138
* @pschuh made their first contribution in https://github.com/keras-team/keras/pull/21164
* @cantonios made their first contribution in https://github.com/keras-team/keras/pull/21184
* @victorgearhead made their first contribution in https://github.com/keras-team/keras/pull/21129
* @srinjoydutta03 made their first contribution in https://github.com/keras-team/keras/pull/21168
* @SiddharthV147 made their first contribution in https://github.com/keras-team/keras/pull/21231
* @emmanuel-ferdman made their first contribution in https://github.com/keras-team/keras/pull/21241
* @pctablet505 made their first contribution in https://github.com/keras-team/keras/pull/21254
* @Imokutmfon made their first contribution in https://github.com/keras-team/keras/pull/21257
* @sanleo-wq made their first contribution in https://github.com/keras-team/keras/pull/21269

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.9.0...v3.10.0

# v3.9.2 (2025-04-02)

## What's Changed
* Fix a `Remat` error when called with a model.

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.9.1...v3.9.2

# v3.9.1 (2025-03-27)

## What's Changed
* Fix a flash attention TPU error.
* Fix an incorrect argument in JAX flash attention.

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.9.0...v3.9.1

# v3.9.0 (2025-03-04)

## New features

- Add the new Keras rematerialization API: `keras.RematScope` and `keras.remat`. It can be used to turn on rematerialization for certain layers in a fine-grained manner,
  e.g. only for layers larger than a certain size, or for a specific set of layers, or only for activations.
- Increase op coverage for the OpenVINO backend.
- New operations:
    - `keras.ops.rot90`
    - `keras.ops.rearrange` (Einops-style)
    - `keras.ops.signbit`
    - `keras.ops.polar`
    - `keras.ops.image.perspective_transform`
    - `keras.ops.image.gaussian_blur`
- New layers:
    - `keras.layers.RMSNormalization`
    - `keras.layers.AugMix`
    - `keras.layers.CutMix`
    - `keras.layers.RandomInvert`
    - `keras.layers.RandomErasing`
    - `keras.layers.RandomGaussianBlur`
    - `keras.layers.RandomPerspective`
- Minor additions:
    - Add support for a `dtype` argument to the `JaxLayer` and `FlaxLayer` layers.
    - Add boolean input support to the `BinaryAccuracy` metric.
    - Add an `antialias` argument to the `keras.layers.Resizing` layer.
- Security fix: disallow object pickling in saved `npz` model files (NumPy format). Thanks to [Peng Zhou](https://github.com/zpbrent) for reporting the vulnerability.

## New Contributors

* @harshaljanjani made their first contribution in https://github.com/keras-team/keras/pull/20745
* @doncarlos999 made their first contribution in https://github.com/keras-team/keras/pull/20641
* @sonali-kumari1 made their first contribution in https://github.com/keras-team/keras/pull/20775
* @jurca made their first contribution in https://github.com/keras-team/keras/pull/20810
* @zskendall made their first contribution in https://github.com/keras-team/keras/pull/20805
* @apehex made their first contribution in https://github.com/keras-team/keras/pull/20803
* @nikolasavic3 made their first contribution in https://github.com/keras-team/keras/pull/20896
* @ibraaaa made their first contribution in https://github.com/keras-team/keras/pull/20926
* @AsVoider made their first contribution in https://github.com/keras-team/keras/pull/20921
* @abheesht17 made their first contribution in https://github.com/keras-team/keras/pull/20932
* @Mohamed-Ashraf273 made their first contribution in https://github.com/keras-team/keras/pull/20934
* @kuanxian1 made their first contribution in https://github.com/keras-team/keras/pull/20951
* @praveenhosdrug123 made their first contribution in https://github.com/keras-team/keras/pull/20916

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.8.0...v3.9.0

# v3.8.0 (2025-01-07)

## New: OpenVINO backend

OpenVINO is now available as an inference-only Keras backend. You can start using it by setting the `backend` field to `"openvino"` in your `keras.json` config file.

OpenVINO is an inference-only deep learning framework tailored for CPUs (x86, ARM), certain GPUs (OpenCL-capable, integrated and discrete), and certain AI accelerators (Intel NPU).

Because OpenVINO does not support gradients, you cannot use it for training (e.g. `model.fit()`), only inference.
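For reference, the config file lives at `~/.keras/keras.json`; a config selecting OpenVINO might look like the following (the fields other than `backend` are shown with their usual defaults):

```json
{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "openvino",
    "image_data_format": "channels_last"
}
```

Alternatively, setting the `KERAS_BACKEND` environment variable before importing Keras overrides the config file.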
You can train your models with the JAX/TensorFlow/PyTorch backends and, once trained, reload them with the OpenVINO backend for inference on a target device supported by OpenVINO.

## New: ONNX model export

You can now export your Keras models to the ONNX format from the JAX, TensorFlow, and PyTorch backends.

Just pass `format="onnx"` in your `model.export()` call:

```python
# Export the model as an ONNX artifact
model.export("path/to/location", format="onnx")

# Load the artifact in a different process/environment
ort_session = onnxruntime.InferenceSession("path/to/location")

# Run inference
ort_inputs = {
    k.name: v for k, v in zip(ort_session.get_inputs(), input_data)
}
predictions = ort_session.run(None, ort_inputs)
```

## New: scikit-learn API compatibility interface

It's now possible to easily integrate Keras models into scikit-learn pipelines! The following wrapper classes are available:

- `keras.wrappers.SKLearnClassifier`: implements the sklearn `Classifier` API
- `keras.wrappers.SKLearnRegressor`: implements the sklearn `Regressor` API
- `keras.wrappers.SKLearnTransformer`: implements the sklearn `Transformer` API

## Other feature additions

- Add new ops:
    - Add `keras.ops.diagflat`
    - Add `keras.ops.unravel_index`
- Add new activations:
    - Add the `sparse_plus` activation
    - Add the `sparsemax` activation
- Add new image augmentation and preprocessing layers:
    - Add `keras.layers.RandAugment`
    - Add `keras.layers.Equalization`
    - Add `keras.layers.MixUp`
    - Add `keras.layers.RandomHue`
    - Add `keras.layers.RandomGrayscale`
    - Add `keras.layers.RandomSaturation`
    - Add `keras.layers.RandomColorJitter`
    - Add `keras.layers.RandomColorDegeneration`
    - Add `keras.layers.RandomSharpness`
    - Add `keras.layers.RandomShear`
- Add an `axis` argument to the `tversky` loss.

## JAX-specific changes

- Add support for JAX named scopes.

## TensorFlow-specific changes

- Make `keras.random.shuffle` XLA-compilable.

## PyTorch-specific changes

- Add support for `model.export()` and `keras.export.ExportArchive` with the PyTorch backend, supporting both the TF SavedModel format and the ONNX format.

## New Contributors

* @LavanyaKV1234 made their first contribution in https://github.com/keras-team/keras/pull/20553
* @jakubxy08 made their first contribution in https://github.com/keras-team/keras/pull/20563
* @dhantule made their first contribution in https://github.com/keras-team/keras/pull/20565
* @roebel made their first contribution in https://github.com/keras-team/keras/pull/20575
* @Surya2k1 made their first contribution in https://github.com/keras-team/keras/pull/20613
* @edge7 made their first contribution in https://github.com/keras-team/keras/pull/20584
* @adrinjalali made their first contribution in https://github.com/keras-team/keras/pull/20599
* @mmicu made their first contribution in https://github.com/keras-team/keras/pull/20655
* @rkazants made their first contribution in https://github.com/keras-team/keras/pull/19727
* @lkk7 made their first contribution in https://github.com/keras-team/keras/pull/20682
* @Furkan-rgb made their first contribution in https://github.com/keras-team/keras/pull/20684
* @punkeel made their first contribution in https://github.com/keras-team/keras/pull/20694
* @kas2020-commits made their first contribution in https://github.com/keras-team/keras/pull/20709

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.7.0...v3.8.0

# v3.7.0 (2024-11-26)

## API changes

- Add a `flash_attention` argument to `keras.ops.dot_product_attention` and to `keras.layers.MultiHeadAttention`.
- Add the `keras.layers.STFTSpectrogram` layer (to extract STFT spectrograms from inputs as a preprocessing step) as well as its initializer, `keras.initializers.STFTInitializer`.
- Add the `celu`, `glu`, `log_sigmoid`, `hard_tanh`, `hard_shrink`, and `squareplus` activations.
- Add the `keras.losses.Circle` loss.
- Add the image visualization utilities `keras.visualization.draw_bounding_boxes`, `keras.visualization.draw_segmentation_masks`, `keras.visualization.plot_image_gallery`, and `keras.visualization.plot_segmentation_mask_gallery`.
- Add a `double_checkpoint` argument to `BackupAndRestore` to save a fallback checkpoint in case the first checkpoint gets corrupted.
- Add bounding box preprocessing support to the image augmentation layers `CenterCrop`, `RandomFlip`, `RandomZoom`, `RandomTranslation`, and `RandomCrop`.
- Add the `keras.ops.exp2` and `keras.ops.inner` operations.

## Performance improvements

- JAX backend: add native Flash Attention support for GPU (via cuDNN) and TPU (via a Pallas kernel). Flash Attention is now used automatically when the hardware supports it.
- PyTorch backend: add native Flash Attention support for GPU (via cuDNN). It is currently opt-in.
- TensorFlow backend: enable more kernel fusion via `bias_add`.
- PyTorch backend: add support for Intel XPU devices.
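For context, Flash Attention computes the same mathematical result as ordinary scaled dot-product attention, just with a fused, memory-efficient kernel. A tiny pure-Python reference of those semantics (illustrative only — not the optimized algorithm):

```python
import math

def dot_product_attention(q, k, v):
    # softmax(q @ k^T / sqrt(d)) @ v for a single head, with q, k, v
    # given as lists of row vectors. Flash attention produces the same
    # values without materializing the full attention matrix.
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]      # rows of the softmax sum to 1
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 2.0], [3.0, 4.0]]
print(dot_product_attention(q, k, v))
```

Each output row is a convex combination of the rows of `v`, which is why the fused kernel can process keys block by block while keeping running softmax statistics.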
## New Contributors

* @mostafa-mahmoud made their first contribution in https://github.com/keras-team/keras/pull/20313
* @TrAyZeN made their first contribution in https://github.com/keras-team/keras/pull/20321
* @dryglicki made their first contribution in https://github.com/keras-team/keras/pull/20353
* @jm-willy made their first contribution in https://github.com/keras-team/keras/pull/20352
* @Gopi-Uppari made their first contribution in https://github.com/keras-team/keras/pull/20377
* @nicolaspi made their first contribution in https://github.com/keras-team/keras/pull/20383
* @sineeli made their first contribution in https://github.com/keras-team/keras/pull/20368
* @LakshmiKalaKadali made their first contribution in https://github.com/keras-team/keras/pull/20403
* @mwtoews made their first contribution in https://github.com/keras-team/keras/pull/20427
* @mrry made their first contribution in https://github.com/keras-team/keras/pull/20438
* @rohithpudari made their first contribution in https://github.com/keras-team/keras/pull/20447
* @ma7555 made their first contribution in https://github.com/keras-team/keras/pull/20452
* @jakevdp made their first contribution in https://github.com/keras-team/keras/pull/20469
* @lcs-crr made their first contribution in https://github.com/keras-team/keras/pull/20503
* @rameshdange5191 made their first contribution in https://github.com/keras-team/keras/pull/20525

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.6.0...v3.7.0

# v3.6.0 (2024-10-03)

## Highlights

* New file editor utility: `keras.saving.KerasFileEditor`. Use it to inspect, diff, modify, and resave Keras weights files. [See the basic workflow here](https://colab.research.google.com/drive/1b1Rxf8xbOkMyvjpdJDrGzSnisyXatJsW?usp=sharing).
* New `keras.utils.Config` class for managing experiment config parameters.

## BREAKING changes

* When using `keras.utils.get_file` with `extract=True` or `untar=True`, the return value is now the path of the extracted directory, rather than the path of the archive.

## Other changes and additions

* Logging is now asynchronous in `fit()`, `evaluate()`, and `predict()`. This enables 100% compact stacking of `train_step` calls on accelerators (e.g. when running small models on TPU).
    - If you are using custom callbacks that rely on `on_batch_end`, this will disable async logging. You can force it back by adding `self.async_safe = True` to your callbacks. Note that the `TensorBoard` callback isn't considered async-safe by default. Default callbacks like the progress bar are async-safe.
* Added the `keras.saving.KerasFileEditor` utility to inspect, diff, modify, and resave Keras weights files.
* Added the `keras.utils.Config` class. It behaves like a dictionary, with a few nice features:
    - All entries are accessible and settable as attributes, in addition to dict-style access (e.g. `config.foo = 2` and `config["foo"]` are both valid).
    - You can easily serialize it to JSON via `config.to_json()`.
    - You can easily freeze it, preventing future changes, via `config.freeze()`.
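Those three behaviors can be sketched in a few lines of plain Python — an illustrative stand-in, not the actual `keras.utils.Config` implementation:

```python
import json

class Config:
    """Dict-like container with attribute access, JSON export, and freezing
    (illustrative sketch only)."""

    def __init__(self, **entries):
        object.__setattr__(self, "_entries", dict(entries))
        object.__setattr__(self, "_frozen", False)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return object.__getattribute__(self, "_entries")[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if self._frozen:
            raise ValueError("Cannot modify a frozen Config.")
        self._entries[name] = value

    def __getitem__(self, key):
        return self._entries[key]

    def __setitem__(self, key, value):
        setattr(self, key, value)

    def to_json(self):
        return json.dumps(self._entries)

    def freeze(self):
        object.__setattr__(self, "_frozen", True)

config = Config(learning_rate=0.01)
config.batch_size = 32              # attribute-style write...
assert config["batch_size"] == 32   # ...dict-style read
config.freeze()                     # further writes now raise
```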
* Added bitwise NumPy ops:
    * `bitwise_and`
    * `bitwise_invert`
    * `bitwise_left_shift`
    * `bitwise_not`
    * `bitwise_or`
    * `bitwise_right_shift`
    * `bitwise_xor`
* Added the math op `keras.ops.logdet`.
* Added the NumPy op `keras.ops.trunc`.
* Added `keras.ops.dot_product_attention`.
* Added `keras.ops.histogram`.
* Allow infinite `PyDataset` instances to use multithreading.
* Added a `verbose` argument to the `keras.saving.ExportArchive.write_out()` method for exporting TF SavedModels.
* Added an `epsilon` argument to `keras.ops.normalize`.
* Added the `Model.get_state_tree()` method for retrieving a nested dict mapping variable paths to variable values (either as NumPy arrays or as backend tensors, the default). This is useful for rolling out custom JAX training loops.
* Added the image augmentation/preprocessing layers `keras.layers.AutoContrast` and `keras.layers.Solarization`.
* Added the `keras.layers.Pipeline` class, to apply a sequence of layers to an input. This class is useful for building a preprocessing pipeline. Compared to a `Sequential` model, `Pipeline` features a few important differences:
    - It's not a `Model`, just a plain layer.
    - When the layers in the pipeline are compatible with `tf.data`, the pipeline will also remain `tf.data`-compatible, independently of the backend you use.
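In spirit, a pipeline just applies its steps in order; a minimal pure-Python sketch of that behavior (conceptual only, not the Keras class, which does the same with preprocessing layers):

```python
class Pipeline:
    # Apply a sequence of callable steps to an input, in order.
    def __init__(self, steps):
        self.steps = steps

    def __call__(self, x):
        for step in self.steps:
            x = step(x)
        return x

pipeline = Pipeline([
    lambda xs: [v / 255.0 for v in xs],   # rescale to [0, 1]
    lambda xs: [v - 0.5 for v in xs],     # center around 0
])
print(pipeline([0, 255]))  # [-0.5, 0.5]
```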
## New Contributors

* @alexhartl made their first contribution in https://github.com/keras-team/keras/pull/20125
* @Doch88 made their first contribution in https://github.com/keras-team/keras/pull/20156
* @edbosne made their first contribution in https://github.com/keras-team/keras/pull/20151
* @ghsanti made their first contribution in https://github.com/keras-team/keras/pull/20185
* @joehiggi1758 made their first contribution in https://github.com/keras-team/keras/pull/20223
* @AryazE made their first contribution in https://github.com/keras-team/keras/pull/20228
* @sanskarmodi8 made their first contribution in https://github.com/keras-team/keras/pull/20237
* @himalayo made their first contribution in https://github.com/keras-team/keras/pull/20262
* @nate2s made their first contribution in https://github.com/keras-team/keras/pull/20305
* @DavidLandup0 made their first contribution in https://github.com/keras-team/keras/pull/20316

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.5.0...v3.6.0

# v3.5.0 (2024-08-12)

## What's Changed

* Add integration with the Hugging Face Hub.
  You can now save models to the Hugging Face Hub directly from `keras.Model.save()` and load `.keras` models directly from the Hugging Face Hub with `keras.saving.load_model()`.
* Ensure compatibility with NumPy 2.0.
* Add the `keras.optimizers.Lamb` optimizer.
* Improve `keras.distribution` API support for very large models.
* Add the `keras.ops.associative_scan` op.
* Add the `keras.ops.searchsorted` op.
* Add the `keras.utils.PyDataset.on_epoch_begin()` method.
* Add a `data_format` argument to the `keras.layers.ZeroPadding1D` layer.
* Bug fixes and performance improvements.

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.4.1...v3.5.0

# v3.4.1 (2024-06-26)

This is a minor bugfix release.

# v3.4.0 (2024-06-25)

## Highlights

- Add support for arbitrary, deeply nested input/output structures in Functional models
  (e.g. dicts of dicts of lists of inputs or outputs).
- Add support for optional Functional inputs.
- Introduce `keras.dtype_policies.DTypePolicyMap` for easy configuration of the dtype policies of nested sublayers of a subclassed layer/model.
- New ops:
  - `keras.ops.argpartition`
  - `keras.ops.scan`
  - `keras.ops.lstsq`
  - `keras.ops.switch`
  - `keras.ops.dtype`
  - `keras.ops.map`
  - `keras.ops.image.rgb_to_hsv`
  - `keras.ops.image.hsv_to_rgb`

## What's changed

- Add support for `float8` inference in the `Dense` and `EinsumDense` layers.
- Add a custom `name` argument to all Keras Applications models.
- Add an `axis` argument to `keras.losses.Dice`.
- Enable `keras.utils.FeatureSpace` to be used in a `tf.data` pipeline even when the backend isn't TensorFlow.
- The `StringLookup` layer can now take `tf.SparseTensor` as input.
- `Metric.variables` is now recursive.
- Add a `training` argument to `Model.compute_loss()`.
- Add a `dtype` argument to all losses.
- `keras.utils.split_dataset` now supports nested structures in the dataset.
- Bug fixes and performance improvements.

**Full Changelog**: https://github.com/keras-team/keras/compare/v3.3.3...v3.4.0
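Several of the ops listed above follow familiar functional-programming patterns; for instance, `keras.ops.scan` uses the carry-and-outputs contract popularized by `jax.lax.scan`. A pure-Python sketch of those semantics (the real op works on tensors and can be compiled):

```python
def scan(f, init, xs):
    # f maps (carry, x) -> (carry, y); scan threads the carry through xs
    # and collects every y.
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# Running total: the carry accumulates the sum, and y emits it at each step.
final, totals = scan(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
print(final, totals)  # 10 [1, 3, 6, 10]
```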