
Illegal instruction #3

Open
roshanravan opened this issue Sep 20, 2023 · 7 comments
Comments

@roshanravan

roshanravan commented Sep 20, 2023

When I enter 'chat', it loads and then I get "Illegal instruction ./chat".

@roshanravan
Author

roshanravan commented Sep 20, 2023

Is it a CPU architecture error? The device I am using is a OnePlus 6T (rooted), which should be able to handle this.

@Tempaccnt
Owner

No, I believe this is caused by some versions of Android 13 that restrict access to /storage/emulated/0/Android/data.

Unfortunately, Termux needs access to that folder; otherwise it has limited functionality. One of those limits seems to be running llama.cpp and alpaca.cpp, which are the core of my script.
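A quick way to check whether Termux already has that access is sketched below. It assumes Termux's standard layout, where `termux-setup-storage` creates a `~/storage/shared` symlink after the Android permission is granted; the paths are assumptions about a stock Termux install, not taken from the script itself.

```shell
# If the shared-storage symlink is missing, Termux has not been granted
# storage access yet; running termux-setup-storage triggers the Android
# permission prompt that creates it.
if [ -d "$HOME/storage/shared" ]; then
  echo "storage access granted"
else
  echo "run: termux-setup-storage"
fi
```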

@roshanravan
Author

But the device I have the problem on is Android 10.
There's something else: this device is rooted and has LSXpose installed. That might be causing the conflict.

@Tempaccnt
Owner

Tempaccnt commented Sep 21, 2023

I will try to find out the cause. Unfortunately, I can't recreate it on any of my phones, so I will see if any of my friends can recreate it on theirs.

In the meantime, you could also check the llama.cpp and alpaca.cpp repositories for this error; there are many reports of the same issue there.

@roshanravan
Author

I have three Android devices, and I also get that error on the one with Android 13, but at first it was working just fine.

@KonstantinGeist

KonstantinGeist commented Jan 1, 2024

The error happens because of a bug in Snapdragon 8. It can be solved by changing "-mcpu=native" to "-mcpu=native+nosve" in the CMakeLists.txt file. However, the current project is terminally broken, because it clones and compiles the head of the llama.cpp repo, which doesn't support the model format that the script downloads.
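The one-line change above can be applied mechanically. The sketch below assumes the flag appears literally as "-mcpu=native" in llama.cpp's CMakeLists.txt (it may live in a variable or be generated differently in newer revisions); a sample file is created here only so the command can be demonstrated end to end.

```shell
# Demo of the fix: rewrite "-mcpu=native" to "-mcpu=native+nosve" so the
# compiler stops emitting SVE instructions that trip the Snapdragon 8 bug.
# The sample file stands in for the real CMakeLists.txt; after cloning
# llama.cpp, run only the sed line against the actual file.
printf 'set(ARCH_FLAGS -mcpu=native)\n' > CMakeLists.txt
sed -i 's/-mcpu=native/-mcpu=native+nosve/g' CMakeLists.txt
grep -- '-mcpu=native+nosve' CMakeLists.txt
# prints: set(ARCH_FLAGS -mcpu=native+nosve)
```

The `+nosve` suffix is a standard AArch64 feature modifier that disables SVE code generation while keeping the rest of the auto-detected CPU features.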

@licryle

licryle commented Jan 17, 2024

Not clean (llama.cpp should really fix this upstream), but in the meantime, here's a workaround that worked on my Snapdragon 8.
