From CISC to RISC:

Language-Model Guided Assembly Transpilation

Ahmed Heakl1*, Chaimaa Abi1*, Rania Hossam1, Abdulrahman Mahmoud1

* Equal Contributions

1Mohamed bin Zayed University of AI


CRT pipeline stages: Data (AnghaBench data curation), Experimentation (model tuning and accuracy), and Optimization & Deployment (final training and Rosetta evaluation).

News

[2024-11-04]: Our quantized models are available at HuggingFace. We welcome all usage and improvements!

Abstract

The transition from x86 to ARM architecture is becoming increasingly common across various domains, driven primarily by ARM's energy efficiency and improved performance. However, this ISA shift poses significant challenges, mainly due to the extensive legacy ecosystem of x86 software and the lack of portability across proprietary ecosystems and software stacks. This paper introduces CRT, a lightweight LLM-based transpiler that automatically converts x86 assembly to ARM assembly. Our approach bridges the fundamental architectural gap between x86's CISC-based and ARM's RISC-based computing paradigms while preserving program semantics and optimizing performance.

We evaluate CRT on diverse real-world applications, achieving 79.25% translation accuracy from x86 to ARMv5 on our comprehensive test suite, and an 88.68% accuracy from x86 to RISC-V. In practical deployments on Apple M2 hardware (ARMv8), our transpiled code achieves a 1.73x speedup over Apple's Rosetta 2 virtualization engine, while delivering 2.41x better memory efficiency and 1.47x lower energy consumption. Through testing and analysis, we show that CRT successfully navigates the CISC/RISC divide and generates correctly executable RISC code despite machine language barriers. We release our code, models, training dataset, and benchmark as open-source.

Results and Analysis

Input: ldr r1, r2

Tokenizer                 Tokens
DeepSeek/Yi-Coder         ld | r | r1 | , | r2
Our Extended Tokenizer    ldr | r1 | , | r2

Table 1. Comparison of tokenization between the DeepSeek/Yi-Coder tokenizer and our extended tokenizer for the input ldr r1, r2. Token boundaries are separated by vertical bars. Note how our extended tokenizer keeps related units (e.g., ldr and r1) as single tokens rather than splitting them into subwords.
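The effect shown in Table 1 can be sketched with a deliberately simplified greedy longest-match tokenizer over a fixed vocabulary. This is only an illustration of why vocabulary coverage matters, not the actual BPE tokenizers used by DeepSeek/Yi-Coder or CRT; the two vocabularies below are hypothetical stand-ins.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary.

    A simplified stand-in for subword tokenization: at each position,
    take the longest vocabulary entry that matches, else fall back to
    a single character."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # single-character fallback
            i += 1
    return tokens


# Hypothetical base vocabulary lacking assembly-specific entries:
# "ldr" is absent, so it splits into "ld" + "r".
base_vocab = {"ld", "r", "r1", "r2", ",", " "}

# Extending the vocabulary with whole mnemonics keeps them intact.
extended_vocab = base_vocab | {"ldr"}

print(tokenize("ldr r1, r2", base_vocab))
# ['ld', 'r', ' ', 'r1', ',', ' ', 'r2']
print(tokenize("ldr r1, r2", extended_vocab))
# ['ldr', ' ', 'r1', ',', ' ', 'r2']
```

Fragmenting mnemonics like ldr across subwords forces the model to reassemble instruction semantics from pieces; adding them as single vocabulary entries gives the model atomic units that align with the instruction set.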