Issues: openxla/xla
[cpu] LLVM error during compilation of bf16 convert on aarch64 (label: CPU)
#19105 opened Nov 6, 2024 by hawkinsp
XLA:CPU performance regression with the min alignment changed from 16 to 64
#18611 opened Oct 22, 2024 by snadampal
jax_cpu_enable_async_dispatch is degrading the inference performance on x86 and arm64 cpu backend
#18608 opened Oct 22, 2024 by snadampal
Unexpected slow-down with @jit on simple functions that only use element-wise operations and jnp.roll() on CPUs
#18478 opened Oct 18, 2024 by pmocz
Unexpected speedup from wrapping function call in trivial jax.lax.cond statement
#18440 opened Oct 17, 2024 by cgiovanetti
Deserialization of executables fails on non-zero ranks when deserializing single-device executable
#18286 opened Oct 14, 2024 by jaro-sevcik
error: call of overloaded 'TileAssignment(<brace-enclosed initializer list>)' is ambiguous with gcc 10
#18140 opened Oct 10, 2024 by elistevens
Inadequate memory consumption when using HSDP without gradient accumulation
#18090 opened Oct 9, 2024 by qGentry
AVX512 quantization (cast from float to uint8) returns wrong results
#17800 opened Oct 1, 2024 by Flamefire
[XLA:CPU] Support limiting LLVM codegen in Aarch64 and other new x86 instructions (labels: CPU, enhancement)
#17758 opened Sep 30, 2024 by penpornk
ElementalIrEmitterExecutionTest.ConvertFloatsToF8E4FN failed -0.0586 vs -0.0625
#17324 opened Sep 18, 2024 by apivovarov
ElementalIrEmitterExecutionTest.IotaF8E4M3FN - Invalid LLVM IR
#17323 opened Sep 18, 2024 by apivovarov
Transposing to different layout permutations results in different numerics
#17276 opened Sep 17, 2024 by elfiegg
Eagerly create common nccl communicator(s) during init (labels: enhancement, NVIDIA-GPU)
#17108 opened Sep 12, 2024 by skye
XLA flags: No speed ups on GPUs and segmentation fault
#17103 opened Sep 12, 2024 by AakashKumarNain