kanshurichard / enableAppleAI

Enable Apple Intelligence on Macs sold in Mainland China with SIP enabled, tested on macOS 15.4.1+ and 26.1 beta


Quick Overview

enableAppleAI is a shell-script-based method for permanently enabling Apple Intelligence on Mac devices sold in Mainland China. It works by patching the eligibility cache files macOS consults when deciding whether the feature is available, and it needs neither a long-running background service nor permanently disabled SIP. It has been tested on macOS 15.4.1+ and the 26.1 beta.

Pros

  • No persistent background service; SIP can be re-enabled once the patch is applied
  • Single-command installation, with a one-click uninstall in the 3.X versions

Cons

  • Requires temporarily disabling SIP, which weakens system security during setup
  • Relies on undocumented system cache files, so macOS updates may break it
  • Some features (such as ChatGPT in Siri) still require a network IP in a supported region

Competitor Comparisons

Stable Diffusion with Core ML on Apple Silicon

Pros of ml-stable-diffusion

  • More comprehensive and actively maintained project
  • Focuses on implementing Stable Diffusion models on Apple Silicon
  • Provides optimized performance for Apple devices

Cons of ml-stable-diffusion

  • Requires more setup and configuration
  • Limited to Apple devices and ecosystems
  • May have a steeper learning curve for beginners

Code Comparison

enableAppleAI:

curl -sL https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai.sh | bash

ml-stable-diffusion:

import CoreML

let model = try MLModel(contentsOf: modelURL)
let input = try MLDictionaryFeatureProvider(dictionary: ["input": inputImage])
let output = try model.prediction(from: input)

The two projects solve different problems: enableAppleAI is a shell script that patches macOS eligibility caches so Apple Intelligence can be enabled on Macs sold in Mainland China, while ml-stable-diffusion runs Stable Diffusion models natively through Core ML and Swift. ml-stable-diffusion delivers optimized on-device image generation but needs model conversion and setup; enableAppleAI is a single script aimed at unlocking a built-in system feature.

🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

Pros of diffusers

  • Extensive library of pre-trained diffusion models for various tasks
  • Well-documented API with easy-to-use interfaces
  • Active community and frequent updates

Cons of diffusers

  • Larger resource requirements due to complex models
  • Steeper learning curve for beginners
  • May be overkill for simple AI tasks

Code Comparison

enableAppleAI:

curl -O https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai.sh
chmod +x enable_ai.sh
./enable_ai.sh

diffusers:

from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("A photo of a cat").images[0]

Summary

diffusers offers a comprehensive suite of diffusion models with extensive documentation and community support, making it well suited to complex generative tasks, at the cost of heavier resource requirements and a steeper learning curve. enableAppleAI has a much narrower scope: it is a shell script that unlocks the Apple Intelligence features already built into macOS on region-locked Macs, and ships no models of its own. The code comparison shows diffusers' high-level Python API for image generation next to enableAppleAI's script-based installation.

High-Resolution Image Synthesis with Latent Diffusion Models

Pros of stablediffusion

  • More comprehensive and actively maintained project with frequent updates
  • Larger community support and contributions
  • Advanced features for image generation and manipulation

Cons of stablediffusion

  • More complex setup and installation process
  • Higher computational requirements for running the model
  • Steeper learning curve for beginners

Code Comparison

enableAppleAI:

curl -sL https://cdn.jsdelivr.net/gh/kanshurichard/enableAppleAI@main/enable_ai.sh | bash

stablediffusion:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(prompt="a photo of an astronaut riding a horse on mars").images[0]

The enableAppleAI repository focuses on patching the eligibility caches that macOS uses to gate Apple Intelligence, so the built-in features can be enabled on Macs sold in Mainland China. stablediffusion, in contrast, is a comprehensive project for generating and manipulating images with latent diffusion models. enableAppleAI is specialized for a single Apple-ecosystem unlock, while stablediffusion offers a wide range of image-generation capabilities across platforms.

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • More comprehensive and feature-rich UI for Stable Diffusion
  • Extensive community support and active development
  • Wide range of extensions and plugins available

Cons of stable-diffusion-webui

  • Requires more setup and configuration
  • Higher system requirements for optimal performance
  • Steeper learning curve for beginners

Code Comparison

enableAppleAI:

curl -O https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai.sh
cat enable_ai.sh   # review before executing
chmod +x enable_ai.sh && ./enable_ai.sh

stable-diffusion-webui:

import modules.scripts
from modules import script_callbacks

def on_app_started(demo, app):
    pass  # custom initialization code

script_callbacks.on_app_started(on_app_started)

The code snippets highlight the different focus areas of the two projects: enableAppleAI is installed with a shell script that patches system eligibility caches, while stable-diffusion-webui exposes a Python callback framework for extending its web interface.

README

enableAppleAI

A way to permanently and reliably enable Apple Intelligence on Mac devices sold in Mainland China (currently verified on macOS 15.4.1+ and the 26.1 beta), without running a long-lived background service and without leaving SIP disabled.

Screenshot 2025-05-04 09 42 49

What's New in 3.X

  • Added Method 2 (inspired by https://github.com/hyderay/AiOnMac): it only edits plist files and no longer needs lldb to debug any system process. Try it if Method 1 fails.
  • Added patching of the countryd cache files, so that on macOS 26, features such as ChatGPT in Siri, Apple News, and the international Apple Maps can work even when the device is located in an unsupported country such as China (a network IP in a supported region is still required).
  • Version 3.1 adds support for features such as Foundation Model and Personal QA.

How It Works

--- Method 1 (a more thorough patch; try this first) ---

This method attempts to bypass Apple's eligibility checks for Apple Intelligence:

  1. Using a script from here, temporarily inject eligibilityd with lldb to impersonate a US ("LL") model, so that it writes into the system eligibility database that this machine supports AI (see the script's source repository for details).
  2. Modify the system file /private/var/db/eligibilityd/eligibility.plist, in particular the check values for the device region code (OS_ELIGIBILITY_INPUT_DEVICE_REGION_CODE) and the external boot drive (OS_ELIGIBILITY_INPUT_EXTERNAL_BOOT_DRIVE), so the system stops using these inputs as preconditions for enabling the features.
  3. Lock the patched cache files by changing their permissions and setting the uchg (immutable) flag, so the system cannot refresh them.
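
The override-and-lock idea behind steps 2 and 3 can be sketched on a throwaway mock file. Everything below is illustrative: the two key names come from the description above, but the mock plist layout, the sed edit, and the temp path are assumptions for this demo, not the project's actual script, which patches /private/var/db/eligibilityd/eligibility.plist as root and locks it with chflags uchg.

```shell
#!/bin/sh
# Illustrative only: build a mock eligibility plist, override the
# region-code check, then lock the file against rewrites.
demo=$(mktemp -d)
plist="$demo/eligibility.plist"

cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>OS_ELIGIBILITY_INPUT_DEVICE_REGION_CODE</key>
    <string>CH</string>
    <key>OS_ELIGIBILITY_INPUT_EXTERNAL_BOOT_DRIVE</key>
    <false/>
</dict>
</plist>
EOF

# Step 2 analogue: rewrite the region code to a supported value.
sed -i.bak 's#<string>CH</string>#<string>US</string>#' "$plist"

# Step 3 analogue: drop write permission so nothing casually rewrites it.
# The real method goes further and sets the immutable flag:
#   sudo chflags uchg /private/var/db/eligibilityd/eligibility.plist
# which blocks modification until the flag is cleared with `chflags nouchg`.
chmod 444 "$plist"

grep -c '<string>US</string>' "$plist"
```

The real eligibility.plist is a binary plist owned by root, so the actual script has to run with sudo; the plain-text XML form here is only so the edit is visible.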

--- Method 2 (works around odd failures of Method 1, though it may not unlock every new feature) ---

  1. Modify /private/var/db/eligibilityd/eligibility.plist and a few other system cache files, forcing macOS to treat the device as eligible for every Apple Intelligence feature.
  2. Lock the patched cache files by changing their permissions and setting the uchg (immutable) flag, so the system cannot refresh them.
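
Because macOS periodically regenerates these caches, both methods stand or fall on the lock holding. One quick way to verify that nothing rewrote a patched file is to record a checksum and compare it later; the helper below is an illustration of that idea using a temp file, not part of the project's script, and the real files live under /private/var/db/eligibilityd/.

```shell
#!/bin/sh
# Illustrative helper: record a checksum of a patched cache file and
# later detect whether the system regenerated it. The temp file stands
# in for the real caches under /private/var/db/eligibilityd/.
cache=$(mktemp)
printf 'patched-eligibility-data\n' > "$cache"

before=$(cksum < "$cache")

# ... time passes; if the uchg lock held, the contents are unchanged ...

after=$(cksum < "$cache")
if [ "$before" = "$after" ]; then
    echo "cache intact"
else
    echo "cache was regenerated, re-run the script"
fi
```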

Prerequisites

  1. A Mac running a compatible macOS version (Apple M1 or later, macOS 15.1 or later).
  2. Administrator privileges, since the script runs privileged commands with sudo.
  3. The system region set to "United States", and the system language and Siri language both set to Simplified Chinese (Mandarin)/China, English (USA), or any other language and region supported by Apple Intelligence. Choosing a region that Apple Intelligence does not support will cause enabling to fail.
  4. A stable internet connection to download the script.
  5. SIP (System Integrity Protection) disabled. (It can be re-enabled after the patch without affecting the AI features.)
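
Prerequisite 1 can be checked mechanically. The sketch below compares dotted version strings with a small awk helper; on a real Mac the current version would come from `sw_vers -productVersion`, but it is hard-coded here so the logic runs anywhere, and the helper name `version_ge` is an invention for this demo.

```shell
#!/bin/sh
# Illustrative preflight check: compare a dotted version string against
# the minimum required (15.1). On a real Mac the current version comes
# from `sw_vers -productVersion`; it is hard-coded for the demo.
version_ge() {
    # returns 0 (true) if $1 >= $2, comparing dot-separated numeric fields
    awk -v a="$1" -v b="$2" 'BEGIN {
        n = split(a, x, "."); m = split(b, y, ".")
        len = (n > m) ? n : m
        for (i = 1; i <= len; i++) {
            xi = (i <= n) ? x[i] : 0
            yi = (i <= m) ? y[i] : 0
            if (xi + 0 != yi + 0) exit !(xi + 0 > yi + 0)
        }
        exit 0
    }'
}

current="15.4.1"          # e.g. $(sw_vers -productVersion)
if version_ge "$current" "15.1"; then
    echo "macOS version OK"
else
    echo "macOS too old for Apple Intelligence"
fi
```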

Steps

Follow these steps exactly:

Step 1: Disable System Integrity Protection (SIP)

If SIP is already disabled, you can skip this step. If not, you must disable it manually:

  1. Restart your Mac.
  2. While the Mac starts up, press and hold the power button until it enters macOS Recovery; you may be asked for your password a few times along the way.
  3. From the menu bar at the top of the screen, choose Utilities > **Terminal**.
  4. In the Terminal window, enter the following command and press Return:
    csrutil disable
    
  5. Press y to confirm; you will then see a message that SIP has been disabled.
  6. In Terminal, type reboot and press Return, or choose Restart from the Apple menu, to leave Recovery and boot the Mac.
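
After booting back into macOS, you can confirm the SIP state from a normal terminal with `csrutil status`. Since csrutil only exists on macOS, the sketch below feeds the parser two sample output strings; the messages it prints are this demo's own wording, not csrutil's.

```shell
#!/bin/sh
# Illustrative check of the message `csrutil status` prints. On a real
# Mac you would capture it with:  status=$(csrutil status)
# csrutil only exists on macOS, so two sample strings are parsed here.
sip_state() {
    case "$1" in
        *disabled*) echo "SIP is off, safe to run the script" ;;
        *enabled*)  echo "SIP is on, boot into Recovery and run: csrutil disable" ;;
        *)          echo "unrecognized csrutil output" ;;
    esac
}

sip_state "System Integrity Protection status: disabled."
sip_state "System Integrity Protection status: enabled."
```

Note the `*disabled*` pattern is matched first, since "disabled" would otherwise need careful ordering against broader patterns.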

Step 2: Download and Run the Script

Single-command quick start:

If you fully trust this script, you can run it directly with one command:

Latest 3.1 script:

curl -sL https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai.sh | bash

If the address above is hard to reach from Mainland China, try this domestic mirror:

curl -sL https://cdn.jsdelivr.net/gh/kanshurichard/enableAppleAI@main/enable_ai.sh | bash

If this version gives you trouble, please open an issue, and try the older 2.13 version:

curl -sL https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai_old.sh | bash

Run the script manually:

  1. Open the Terminal application.
  2. Use curl to download the script into the current directory:
    curl -O https://raw.githubusercontent.com/kanshurichard/enableAppleAI/main/enable_ai.sh
    
  3. Review the script: read the downloaded enable_ai.sh carefully with a text editor or a command-line tool (such as cat enable_ai.sh) and make sure you understand what it will do.
  4. Make the script executable:
    chmod +x enable_ai.sh
    
  5. Run the script:
    ./enable_ai.sh
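
Beyond reading the script in the review step, `sh -n` parses a shell script without executing anything, which catches a truncated or corrupted download before you run it. The sketch below uses a stand-in temp file in place of the downloaded enable_ai.sh.

```shell
#!/bin/sh
# Illustrative pre-run check: `sh -n` parses a script without executing
# it, which catches a truncated or corrupted download. The temp file
# stands in for the downloaded enable_ai.sh.
script=$(mktemp)
printf 'echo "hello from enable_ai"\n' > "$script"

if sh -n "$script"; then
    echo "syntax OK: review the contents, then chmod +x and run it"
else
    echo "syntax error: re-download the script"
fi
```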
    

Step 3: Follow the Script's Prompts

After the script finishes running:

  1. Restart your Mac.
  2. After the restart, go to System Settings > General > Apple Intelligence and check the feature's status.
  3. If Apple Intelligence is still enabled and you want to restore the system's security, it is **strongly recommended** that you enter Recovery again and run csrutil enable to re-enable SIP.

Troubleshooting and Feedback

  • If the script hits an error while running, check the error messages in the terminal output.
  • If Apple Intelligence fails to enable after following the steps, or anything else behaves abnormally, you can also file a report in this project's GitHub Issues.

FAQ

Q: How do I uninstall?
A: Version 3.X includes a one-click uninstall.

Q: I ran the script on an older system (such as 15.X) and then upgraded (such as to 26.X). Why are some of the new system's AI features missing?
A: The script is continually updated to support newly added AI features (such as Foundation Model). Run the latest script again to enable the features the new system adds. Before doing so, first uninstall the old version, reboot, and then run the new script.

Q: Can AI be enabled while signed in to iCloud with a Mainland China account?
A: The 3.X versions are reported to support this.

Q: The eligibilityd injection step fails with an error. What should I do?
A: This problem no longer occurs as of version 3.0; please try the latest 3.0 script.

Q: After enabling AI, Siri still calls Mainland services such as Baidu Baike, and ChatGPT cannot be used. What should I do?
A: On macOS 26.X, unlock with the 3.X script, which enables ChatGPT in Siri (an IP address in a supported region is still required). On macOS 15.X, Siri does not look at the model identifier; it uses your IP address and Wi-Fi positioning to decide whether to call Mainland services (such as Baidu), even on Macs sold outside China. Turn off Siri's location permission in Settings > Privacy & Security > Location Services, and consider routing all related URLs through a proxy. For more help, see: https://nsringo.github.io.

Q: Why can't Image Playground create images?
A: Image Playground currently cannot create images when the system language is Chinese. Set the system language to English (United States) and it will work. (macOS 26 appears to support Image Playground in a Chinese-language environment already.)

Q: Can Apple Intelligence be enabled for Traditional Chinese (or some other language)?
A: That depends on whether Apple Intelligence itself supports the language. If the language is not yet supported, the corresponding language files cannot be downloaded even with Apple Intelligence force-enabled on macOS (the download stays stuck indefinitely).