Interview with passionate Rust Developer Radu Marias (Xorio42)

Summary:

This is an interview with the passionate Rust developer Radu Marias. Tune in as we follow his journey from working with Java for Android’s godfather to embracing Rust.

What was his Rust journey like?
How do you start learning Rust? What has he learned from building a distributed file system?
This and a lot more in this interview.

If you prefer audio instead of reading, you can find the audio link at the end of the page.

Where are you based?

I am based in Bucharest, Romania, but I was living the digital nomad lifestyle before, mostly based in Cambodia.

What is your background?

I’m a passionate Rust developer and cryptography enthusiast. I have a background of more than 20 years of development, mostly in Java, but I also have experience with C, C++, and related technologies. A nice thing about my background is that I worked at a company called Danger Inc. in 2007.

It was a US-based company, and they built their own operating system in C. There was another operating system back then called BeOS. It was multimedia-oriented, and it didn’t catch on, so most of that team migrated to Danger and built the operating system there. The applications were written in Java, so they implemented their own JVM, with the garbage collector and so on. And there was a brilliant guy there, a genius, a word I use rarely, a South Korean guy who actually built the Java runtime by himself, the garbage collector included, and it was written mostly in Java.

Oh wow, so they built the Java runtime by themselves?

Yeah. And this guy alone built the garbage collector, which is the core of the Java runtime. He was actually brilliant. I remember a status call when I realized these were very professional people with a lot of experience. There were people from BeOS, like I said, an operating system company. There were people from, I think, Panasonic, who worked on the radio stack. There were people with networking experience, people who had written TCP/IP implementations. So there were very good people.

I was very honored to work on that team. It is actually my favorite product that I’ve been involved in. A nice story is that it (Danger Inc.) was co-founded by a guy called Andy Rubin.

Maybe some of you know Andy Rubin. He left and founded another phone company with an operating system, and that company was bought by Google. Google brought on 10,000 more people, and this is how Android was born. So I was working for the grandfather of Android.

Links:
https://en.wikipedia.org/wiki/Danger,_Inc.
https://en.wikipedia.org/wiki/Danger_Hiptop
https://en.wikipedia.org/wiki/Andy_Rubin

So after that, this project died because, long story short, Microsoft bought it and buried it, unfortunately. It was a bit of a scandal back then. They did an upgrade of the Oracle database in the cloud, and they wanted to reduce the cost, so they didn’t have any backup. They didn’t back up the DB before doing the upgrade, and it was a software upgrade for Oracle. And the changelog in the Oracle release said it was optimizing the space usage of the databases.

And they did such a great optimization that they deleted all the data.

The cloud was the trusted source, and when devices had issues, support would instruct users to “reset from the cloud,” which deleted local data. However, this often led to all data being wiped from the device. To recover, we had to hack the system—faking IDs and modifying dates to make the device think its data was newer, forcing it to push contacts and info back to the server. This required access to the source code and creative workarounds.

So, they were like faking the timestamp, basically?

Yeah, yeah. That was the hack we found to force the device to push its data, to somehow make it interpret the server changes as old changes.

When there was no internet connection, changes were made on both ends, causing sync conflicts. We forced the device version to take priority. Later, Hitachi engineers recovered some data from the hard disk, but because space was limited, new data overwrote old data, resulting in a 30-50% loss. The project lead realized that doing nothing for a month would have allowed full recovery. This happened around 2008-2009 when cloud concepts were still emerging. The company, Danger, had a contract stating, “Your data is now in the Danger infrastructure,” highlighting the early challenges of cloud technology.

It’s in the danger zone?

They joked, “Your data is in Danger,” referencing the company name. Despite a scandal, the team was passionate and skilled, using the phones personally and building the best product. In 2003, they had an email client, MSN, AOL, Yahoo, a browser, app store, Java apps, SSH, SAP, Telnet, and games. It was like BlackBerry, with traffic routed through their proxy, optimizing content for low-resolution displays. The proxy also buffered connections, allowing apps to function seamlessly even during signal loss, which was great for users and developers.

Another project involved Efacec, a Portuguese electrical company. They had AutoCAD schemas for electrical systems, which took two weeks to digitize manually. We created a minimalistic editor, imported AutoCAD files, and used a neural network library (leaf detection) to identify inconsistent transformer drawings. Despite the inconsistencies, the neural network, after training, performed well, automating the process.

The challenge was figuring out how to start the recognition process. We began by randomly selecting a line (a connection between two points) and “walking the line” (inspired by Johnny Cash) using a Java class called Pathfinder. The idea was to follow the line to the nearest shape, recognize it, and then move to the next shape. We expanded the boundary around the shape, applied the neural network for recognition, and marked it as visited once identified. This process continued until all shapes were recognized.
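A rough sketch of that loop, in Rust for illustration (the original was a Java class; the helper functions below are hypothetical stand-ins for the geometry and neural-network steps):

```rust
use std::collections::HashSet;

// Hypothetical stand-ins for the real geometry and neural-network steps.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ShapeId(usize);

fn nearest_shape_along(_line: usize) -> Option<ShapeId> {
    Some(ShapeId(0)) // walk the line to the closest shape
}
fn classify(_shape: ShapeId) -> &'static str {
    "transformer" // expand the boundary and run the neural network
}
fn next_unvisited_line(_visited: &HashSet<ShapeId>) -> Option<usize> {
    None // pick another line that still leads to unvisited shapes
}

fn main() {
    let mut visited = HashSet::new();
    // Start from an arbitrary line and keep "walking the line" until
    // every shape reachable through the schema has been recognized.
    let mut current = Some(0);
    while let Some(line) = current {
        if let Some(shape) = nearest_shape_along(line) {
            if visited.insert(shape) {
                println!("recognized a {}", classify(shape));
            }
        }
        current = next_unvisited_line(&visited);
    }
}
```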

To debug, I created a visual screen that drew lines and highlighted recognized shapes in real-time, with delays for clarity. This helped me see the process and ensure accuracy. The screen became a useful tool for post-recognition adjustments, allowing us to focus on recognized shapes rather than raw lines and squares.

The workflow involved using a legend from each schema to define shapes (e.g., transformers). Once labeled, the recognition process reduced the time from two weeks to just one day.

From 14 days to 1 day, that’s impressive.

The recognition system wasn’t perfect, but it allowed users to visually verify and fix shapes in just one day, making them very happy with the product. At one point, I took a sabbatical year due to personal issues, stepping away from programming entirely. I used my laptop only for news and movies, which, looking back, was surprising given my passion for coding. I had progressed from developer to architect but always stayed hands-on, preferring to code proofs of concept rather than moving into pure management.

After my break, I felt the urge to code again and was introduced to Rust by a friend who was developing an audio editing app. He switched from C to Rust for its memory safety, low latency, and performance. Rust’s automatic memory management eliminates issues like use-after-free and dangling pointers, which account for a significant portion of software vulnerabilities. It also prevents data races, making concurrent programming safer. While Rust allows unsafe code for low-level tasks, it’s well-documented and carefully reviewed.

My friend’s experience with Rust was positive—his code became cleaner, shorter, and easier to maintain. He faced challenges with mutable references but solved them using channels, which are often cleaner than mutexes for concurrency. Rust isn’t easy to learn, especially for beginners, but its safety and performance make it a powerful choice for modern development.
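As a small illustration of that style (a minimal sketch, not his actual code): worker threads own their data and send results over an mpsc channel, so no shared mutable state or mutex is needed.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    for id in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            // Each worker owns its data and only sends results out.
            tx.send(format!("result from worker {id}")).unwrap();
        });
    }
    drop(tx); // drop the original sender so the receiver loop can finish
    for msg in rx {
        println!("{msg}");
    }
}
```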

What was your first experience with Rust?

My first experience with Rust came when a colleague sent me a link to a video from a channel called Let’s Get Rusty. The video demonstrated implementing a state pattern in Rust, which was different from the classical approach. In traditional object-oriented languages, you’d have a common interface with methods for each state, leading to potential issues like calling invalid methods (e.g., listing passwords in a locked state). In Rust, the solution was cleaner: each state was a separate struct with specific methods, and transitions returned a new struct representing the new state. This ensured type safety, preventing invalid method calls entirely.
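A minimal sketch of that typestate idea, with a hypothetical locked/unlocked example rather than the exact code from the video:

```rust
// Methods only exist on the state they are valid in, so calling
// `list_passwords` while locked is a compile error, not a runtime bug.
struct Locked;
struct Unlocked {
    passwords: Vec<String>,
}

impl Locked {
    // A transition consumes the old state and returns the new one.
    fn unlock(self, _master_password: &str) -> Unlocked {
        Unlocked { passwords: vec!["example".to_string()] }
    }
}

impl Unlocked {
    fn list_passwords(&self) -> &[String] {
        &self.passwords
    }
    fn lock(self) -> Locked {
        Locked
    }
}

fn main() {
    let manager = Locked;
    // manager.list_passwords(); // does not compile: no such method on Locked
    let manager = manager.unlock("hunter2");
    println!("{:?}", manager.list_passwords());
    let _manager = manager.lock();
}
```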

This approach highlighted Rust’s attention to detail and type-oriented design, making it harder to write incorrect code. While similar patterns can be implemented in other languages, Rust’s ownership model and borrow checker enforce memory safety without a garbage collector. The compiler acts like a “garbage collector” by managing memory through strict rules, like ownership and borrowing, which were initially challenging to grasp but ultimately made the language powerful and safe. Despite my 20+ years of experience with languages like Java, C++, and Python, Rust was the hardest to learn due to its unique concepts, but its focus on safety and performance made it worth the effort.

Rust’s ownership model ensures only one owner for a variable, like a function or struct. When the owner is deallocated, the variable is dropped, whether it’s on the stack or heap. This is similar to RAII in C++. Rust’s borrow checker enforces rules on references, allowing multiple immutable references or one mutable reference, akin to a read-write lock. Lifetimes, represented as sets, ensure references don’t outlive their data. These features—ownership, borrowing, and lifetimes—act like a compile-time garbage collector, where you manually manage memory safety. While challenging, they make Rust powerful and safe. Many find these concepts difficult, but understanding them as parts of a “garbage collector” you help build can simplify learning. Despite its steep learning curve, Rust’s design ensures robust and efficient code.
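As a tiny illustration of those borrow rules, which the compiler enforces like a read-write lock:

```rust
fn main() {
    let mut data = vec![1, 2, 3];

    let r1 = &data;
    let r2 = &data; // any number of immutable borrows may coexist
    println!("{r1:?} {r2:?}");
    // r1 and r2 are no longer used past this point...

    let w = &mut data; // ...so one mutable borrow is now allowed
    w.push(4);
    // println!("{r1:?}"); // would not compile: immutable borrow still live
}
```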

I really wanted something like that. And I didn’t mind that it was hard.

I actually liked that it was hard. You know, Kennedy said about going to the moon: we choose to go to the moon not because it is easy, but because it is hard.

And I’m never afraid of a challenge; I like challenges a lot. The satisfaction is also directly proportional.

What was your first Rust project? Where did you start?

I started learning from the Rust book, but I always need to combine reading with coding to stay motivated. I can’t just read books. I thought, “Okay, I need a project to keep me going.” I had some ideas, so I asked ChatGPT for suggestions. It started with a to-do list, and I said, “Come on, not another to-do list—the world doesn’t need that.” Then it suggested a password manager, which was decent, and finally a distributed file system. I thought, “That’s interesting, but maybe too hard for a learning project.” Ironically, I’m now tackling the mysteries of building a distributed file system for learning.

The idea came from a practical need: I had a local folder with project-related data—not sensitive credentials, but things like requirements and test cases. While not catastrophic if exposed, I’d prefer they stay private. I wanted to access these documents from multiple locations. I initially used BTSync (now Resilio), a peer-to-peer sync tool by BitTorrent’s creator. The downside was needing a device always on to fetch new data, which was problematic. I considered Google Drive or Dropbox, but even though I mostly trust them, there’s always a risk of hacking.

I thought an encrypted solution would be ideal, both useful for me and an interesting project. Building from a personal need often leads to the best projects—if it solves your problem and others share it, it could become a good product. If it’s a costly problem, even better. It turned out to be a great learning experience, and I’ve gained a lot from it.

I continued working on the project and considered monetizing it, but realized it was too much work for one person. Hiring freelancers wasn’t sustainable since the project wasn’t generating revenue. So, I posted on Rust forums and Discord groups, pitching it as an open-source learning project. The response was great—we now have a community of around 68 people. Some joined after I presented at a university in Bucharest, where students were encouraged to contribute to open-source projects for real-world experience.

Initially, I was concerned because none of the students knew Rust or cryptography, and they had only a few weeks to learn both while understanding the project and their specific tasks. The professor assured me they’d manage, likely with help from ChatGPT. I offered to guide them, and while not all performed well, seven submitted pull requests that were merged. A few even stayed as active contributors.

Currently, we have about 10-15 active contributors, including developers, DevOps experts, QA engineers, and UI/UX designers, as we’re building both desktop and mobile applications. Their contributions have been invaluable—saving me a lot of work and bringing fresh perspectives. For example, one contributor wrote a brilliant bitwise operation that took me a while to understand, even with ChatGPT’s help. He explained that it took him hours to figure it out initially, which made me appreciate his effort even more.

We’re also using the secrecy crate to ensure sensitive data like credentials and encryption buffers are securely zeroized. It’s been a rewarding experience, and the community’s growth and contributions have made the project far more impactful than I initially imagined.
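A minimal sketch of how the secrecy crate is typically used (assuming the 0.8-style API; the names here are illustrative):

```rust
use secrecy::{ExposeSecret, SecretString};

// The wrapped value is zeroized when dropped and excluded from Debug output.
fn main() {
    let password = SecretString::new("hunter2".to_string());
    // The secret must be exposed explicitly at the point of use:
    authenticate(password.expose_secret());
} // `password` is zeroized here when it goes out of scope

fn authenticate(pw: &str) {
    println!("authenticating with a {}-char password", pw.len());
}
```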

To prevent sensitive data from being moved to an unencrypted swap file, operating systems use a syscall called mlock to lock memory pages. However, mlock works on entire pages, not individual variables. If a variable spans multiple pages, you need to lock all relevant pages. One contributor wrote a clever bitwise operation to calculate the start and end pages based on memory addresses. For example, if a page size is 4096 (a power of two), you can use bitwise operations to determine the page index. By subtracting one from the page size, you create a mask of ones, and using a bitwise AND with the memory address, you isolate the page number. This avoids unnecessary calculations and is highly efficient.
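A small sketch of that page arithmetic, assuming a 4096-byte page size (the mlock syscall itself is omitted):

```rust
const PAGE_SIZE: usize = 4096; // must be a power of two

fn page_range(addr: usize, len: usize) -> (usize, usize) {
    let mask = PAGE_SIZE - 1;              // 0x0FFF: the twelve low bits set
    let start = addr & !mask;              // clear low bits: start of first page
    let end = (addr + len + mask) & !mask; // round up: end of last page
    (start, end)
}

fn main() {
    let (start, end) = page_range(0x1234_5678, 100);
    assert_eq!(start, 0x1234_5000);
    assert_eq!(end, 0x1234_6000);
    // mlock(start as *const _, end - start) would then lock every page
    // the variable touches, keeping them out of swap.
}
```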

This contributor also worked on a distributed key-value store project, using a masterless architecture. Unlike traditional systems with a single master handling writes (which can bottleneck under heavy load), his approach avoids a single point of failure by distributing both data and processing across nodes. This sharding of compute and data improves scalability and performance, especially under high throughput. Learning from contributors like this has been incredibly valuable, as they bring innovative solutions and deep expertise to the project.

Distributed systems often face challenges in processing data across multiple nodes in parallel. Two key concepts in this area are Conflict-Free Replicated Data Types (CRDTs) and Operational Transform (OT). OT, used in tools like Google Docs, relies on operations like insert, move, and delete to ensure consistency across users, regardless of the order in which changes are applied. CRDTs, on the other hand, handle data structures like grow-only sets, where only additions are allowed, making them easy to distribute. However, when removals are introduced, maintaining consistency becomes more complex, requiring careful design and trade-offs.
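A grow-only set is simple enough to sketch in a few lines; its merge is just set union, which is why replicas converge regardless of the order updates arrive in:

```rust
use std::collections::HashSet;

#[derive(Clone)]
struct GSet<T: Eq + std::hash::Hash + Clone> {
    items: HashSet<T>,
}

impl<T: Eq + std::hash::Hash + Clone> GSet<T> {
    fn new() -> Self {
        GSet { items: HashSet::new() }
    }
    fn add(&mut self, item: T) {
        self.items.insert(item);
    }
    // Merging is set union: commutative, associative, idempotent.
    fn merge(&mut self, other: &GSet<T>) {
        self.items.extend(other.items.iter().cloned());
    }
}

fn main() {
    let mut a = GSet::new();
    let mut b = GSet::new();
    a.add("file-1");
    b.add("file-2");
    let snapshot = b.clone();
    b.merge(&a);
    a.merge(&snapshot);
    assert_eq!(a.items, b.items); // both replicas converge
}
```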

A fundamental trade-off in distributed systems is captured by the CAP theorem, which states that in the presence of network partitions (P), you must choose between consistency (C) and availability (A). For example, to avoid split-brain scenarios (where two partitions elect separate masters), systems often require an odd number of nodes and ensure that only the majority partition remains active. This maintains consistency but sacrifices availability for the smaller partition. Alternatively, prioritizing availability can lead to inconsistencies, such as duplicate files in different partitions. Balancing these trade-offs is a core challenge in distributed system design.

In distributed systems, some solutions allow temporary inconsistencies, relying on eventual consistency. This means systems can be out of sync for a short period but will eventually converge to the same state. When partitions rejoin, conflicts are resolved through merging. For example, in a file system, if two partitions create files with the same name, the system might prioritize the most recent change or merge content if changes occur on different lines. However, conflicts on the same line or differing content require choosing one version, often the most recent. This highlights the inherent trade-offs in the CAP theorem: during network partitions (P), you must choose between consistency (C) and availability (A), as there’s no perfect solution for all scenarios.
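A minimal last-write-wins sketch of that “most recent change wins” rule (illustrative types, not code from the project):

```rust
struct Versioned {
    content: String,
    modified_at: u64, // e.g. milliseconds since the epoch
}

// When partitions rejoin, keep the version with the newer timestamp.
fn resolve(a: Versioned, b: Versioned) -> Versioned {
    if a.modified_at >= b.modified_at { a } else { b }
}

fn main() {
    let left = Versioned { content: "draft v1".into(), modified_at: 100 };
    let right = Versioned { content: "draft v2".into(), modified_at: 200 };
    assert_eq!(resolve(left, right).content, "draft v2");
}
```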

How has rencfs or your distributed file system tackled that? What crates have you used to build it, and what does the architecture look like right now?

The rencfs project focuses on encryption at rest but doesn’t handle distributed synchronization. For that, we have the rfs project, a Rust-based distributed file system. While I initially leaned toward consistency for file systems, there are cases where availability might be preferable, such as storing non-critical data like blog posts or sensor readings for IoT devices. For example, minor inconsistencies in temperature averages (e.g., 2.1 vs. 2.2) or text variations in training data for LLMs might not be critical. To address this, rfs allows configuration at setup, letting users choose between consistency and availability, even at the file or folder level. By default, it falls back to consistency but offers flexibility.

Sharding is another key feature. Files are split into chunks distributed across nodes. The simplest sharding algorithm divides chunks by the number of nodes, but this breaks when nodes are added or removed, requiring significant data movement. A better approach is consistent hashing, which maps nodes and files onto a ring using hashes. Each file’s hash points to a location on the ring, and the closest node (clockwise) handles it. This minimizes data redistribution when nodes change, making the system more scalable and resilient.

Consistent hashing distributes data evenly across nodes by mapping them onto a ring using hashes. However, uneven distribution can occur if hashes cluster in one area. To address this, virtual nodes are created by appending suffixes (e.g., 1, 2, 3) to node identifiers before hashing. This ensures a more uniform distribution. When a node goes down or is added, only the affected data is redistributed, minimizing movement. Another approach is key range sharding, used by MongoDB, where data is split into ranges (e.g., 0–max int) assigned to nodes. Adding a node splits a range, and removing one merges ranges, with data copied between nodes as needed. This method is efficient and requires fewer calculations than consistent hashing.
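Here is a toy consistent-hashing ring with virtual nodes in the spirit of what was just described (illustrative only, not the project’s implementation):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

// Each physical node is hashed several times with a suffix ("virtual
// nodes") so keys spread more evenly around the ring.
struct Ring {
    points: BTreeMap<u64, String>, // position on the ring -> physical node
}

impl Ring {
    fn new(nodes: &[&str], vnodes: usize) -> Self {
        let mut points = BTreeMap::new();
        for node in nodes {
            for i in 0..vnodes {
                points.insert(hash_of(&format!("{node}-{i}")), node.to_string());
            }
        }
        Ring { points }
    }

    // The first point clockwise from the key's hash owns the key.
    fn node_for(&self, key: &str) -> &str {
        let h = hash_of(&key);
        self.points
            .range(h..)
            .next()
            .or_else(|| self.points.iter().next()) // wrap around the ring
            .map(|(_, n)| n.as_str())
            .unwrap()
    }
}

fn main() {
    let ring = Ring::new(&["node-a", "node-b", "node-c"], 64);
    println!("chunk-42 -> {}", ring.node_for("chunk-42"));
}
```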

In our rfs project, we’ve implemented consistent hashing and other sharding algorithms, though the full file distribution logic isn’t complete yet. For data replication, we’re considering BitTorrent, which can read from multiple nodes simultaneously. However, BitTorrent typically uses TCP/IP, and we prefer UDP. We’re exploring QUIC, the protocol behind HTTP/3, for its efficiency and reliability over UDP. This flexibility in configuration and choice of algorithms allows us to tailor the system to specific needs, balancing performance and scalability.

We explored using QUIC for data transfer in our distributed file system and found the quinn and iroh crates as potential solutions.

iroh, similar to IPFS but more focused on data transfer, introduced the concept of “blobs” for file emulation. We also considered integrating a popular Rust BitTorrent client, rqbit, and adding QUIC support. However, the owner of the rqbit repository pointed us to µTP (Micro Transport Protocol), a lightweight, UDP-based protocol optimized for torrents. µTP offers features like retries, congestion control, and packet ordering, making it suitable for our needs.

One contributor took on the challenge of implementing µTP from scratch in an async manner, as no existing async libraries were available. His implementation is functional in a development version of rqbit and resolves an issue where modern BitTorrent clients default to µTP without falling back to TCP. This contribution not only enhances our project but also benefits the broader Rust ecosystem, as we plan to offer it as a separate crate for others to use. This collaborative effort highlights the importance of community-driven development and avoiding reinventing the wheel. We will contribute the patches upstream.

Links:
https://crates.io/crates/quinn

That’s super nice. If you’re programming something useful, or you build a nice feature for something, definitely go upstream with it and try to get it into the main repo. You will help so many future developers.

The community will benefit from it.

What are your favourite Rust crates? Do you have any hidden gems to share?

Yeah, there are some incredibly popular and well-designed crates in the Rust ecosystem. For example, I really like clap, the command-line argument parser—it’s fantastic for building CLI tools. Since I’m working on a cryptography-related solution, I also rely heavily on the Rust Crypto collection of crates, which is excellent for cryptographic operations. These tools are popular for a reason—they’re well-built, reliable, and make development much smoother.
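For example, a minimal CLI with clap’s derive API looks roughly like this (hypothetical flags, just for illustration):

```rust
use clap::Parser;

#[derive(Parser)]
#[command(name = "mytool", about = "Example CLI")]
struct Args {
    /// Path of the file to process
    input: String,

    /// Enable verbose output
    #[arg(short, long)]
    verbose: bool,
}

fn main() {
    let args = Args::parse();
    if args.verbose {
        println!("processing {}", args.input);
    }
}
```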

I’m currently using ring for cryptographic operations, as it’s widely adopted and reliable. Interestingly, ring originates from BoringSSL, which itself is a fork of OpenSSL. The library is a mix of assembly code (around 29%), C, and Rust. While this makes it highly optimized, it also introduces platform-specific challenges, especially with the assembly code. For example, I’ve encountered issues when building on certain platforms due to these dependencies. Despite this, ring remains a solid choice for cryptography in Rust, thanks to its performance and strong backing from contributors, including some from the BoringSSL team.

Links:
https://crates.io/crates/clap
https://github.com/RustCrypto
https://crates.io/crates/ring
https://en.wikipedia.org/wiki/OpenSSL
https://github.com/google/boringssl/blob/master/README.md

The assembly code in ring is platform-specific, which is why it’s so performant. I use ring because it’s significantly faster than alternatives like Rust Crypto. On my laptop, which isn’t high-end, ring achieves speeds of around 2.3 gigabytes per second, while Rust Crypto only manages about 700 megabytes per second—more than three times slower. That’s a huge difference, especially for performance-critical tasks like cryptography. This is why I stick with ring, despite its platform-specific challenges.

The performance difference also depends on the cipher and hardware. For example, I support AES (Advanced Encryption Standard) and ChaCha20. AES is older and widely supported, even by governments and NIST, with hardware optimizations on many CPUs, like those from Intel and AMD. These optimizations make AES extremely fast on modern hardware. ChaCha20, on the other hand, is a newer alternative that performs well even without hardware acceleration. The choice of cipher and its implementation can significantly impact performance, which is why ring’s optimized assembly code makes such a difference.

Hardware-optimized solutions, basically, so you can use the hashing functions and the block ciphers really, really fast, because they are written in assembly, which is the fastest of the fastest. What’s really cool is the development around RISC-V processors. Unlike proprietary instruction sets like Intel’s or Power, RISC-V is an open-source, reduced-instruction-set architecture. There’s a lot of exciting work happening with hardware optimizations for cryptography on RISC-V, which is super interesting. For example, projects like the BLAKE3 hashing algorithm are being optimized for RISC-V. It’s a fascinating area because while hardware is being built to enhance crypto performance, there’s also hardware being developed to break it. It’s a double-edged sword, but the innovation in open-source hardware like RISC-V is definitely worth keeping an eye on.


Links:
https://en.wikipedia.org/wiki/RISC-V

Yeah, so these hardware optimizations are quite nice. And by the way, I think RISC-V is already being used in some processors.

But if you’re a developer out there, you should definitely use libraries like ring, use audited and tested cryptography libraries, and try not to roll your own cipher, at least for production stuff.

Yeah, that’s for sure. I very much subscribe to that ideology: don’t roll your own crypto for production stuff. But I also like to learn about ciphers, and writing your own is a great way to understand them.

I didn’t write my own crypto, of course—using existing libraries is much safer. However, even with these libraries, you can still misuse them. For example, libsodium provides a secret box API that only exposes encrypt and decrypt functions, limiting the risk of misuse. In contrast, libraries like ring and Rust Crypto offer more flexibility with methods like seal_in_place and open_in_place, but they require you to handle details like providing a nonce. A nonce (number used once) is critical for most ciphers—the combination of key, nonce, and message must be unique. If you reuse a nonce, it can lead to catastrophic failures, potentially exposing parts of the message or even the key. This is why it’s crucial to use these libraries correctly and understand the underlying principles.
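A minimal sketch of that seal/open flow with ring, showing where the nonce comes in (illustrative, not code from rencfs):

```rust
use ring::aead::{Aad, LessSafeKey, Nonce, UnboundKey, AES_256_GCM, NONCE_LEN};
use ring::rand::{SecureRandom, SystemRandom};

fn main() -> Result<(), ring::error::Unspecified> {
    let rng = SystemRandom::new();
    let mut key_bytes = [0u8; 32];
    rng.fill(&mut key_bytes)?;
    let key = LessSafeKey::new(UnboundKey::new(&AES_256_GCM, &key_bytes)?);

    // The nonce must NEVER repeat for the same key; here a random
    // 96-bit nonce is generated per message.
    let mut nonce_bytes = [0u8; NONCE_LEN];
    rng.fill(&mut nonce_bytes)?;

    // Encrypt in place; the authentication tag is appended.
    let mut in_out = b"secret message".to_vec();
    key.seal_in_place_append_tag(
        Nonce::assume_unique_for_key(nonce_bytes),
        Aad::empty(),
        &mut in_out,
    )?;

    // Decrypt with the same nonce. The nonce is usually stored alongside
    // the ciphertext: it must be unique, not secret.
    let plaintext = key.open_in_place(
        Nonce::assume_unique_for_key(nonce_bytes),
        Aad::empty(),
        &mut in_out,
    )?;
    assert_eq!(&plaintext[..], &b"secret message"[..]);
    Ok(())
}
```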

Yeah, the nonce is really key in mixing the bytes and making sure it actually stays encrypted.

Actually, the nonce is the key. That’s a joke, but it’s very important. The idea is that you could use ring or Rust Crypto, well-audited ciphers, but if you use the same nonce twice, you’ve failed the encryption.

A cipher alone isn’t enough for secure encryption. For example, if you encrypt a random number, an attacker can modify parts of the ciphertext. When you decrypt it, the result will be incorrect, potentially breaking your logic. For instance, if you’re reading a status from an HTTP call and expect a specific value (like zero), an attacker could ensure you never get that value, disrupting your application. To address this, encryption systems often combine ciphers with additional mechanisms. For example, ChaCha20, derived from Salsa20, is often paired with a message authentication code (MAC) to ensure data integrity. This way, even if an attacker modifies the ciphertext, the system can detect the tampering and reject the invalid data. This layered approach is crucial for building secure systems.

Like ChaCha20, which is actually derived from Salsa20.

I think that guy really likes dancing. Salsa, ChaCha: maybe he’s Spanish, or Latin American.

It’s funny—there’s a bit of a theme with Java and Indonesia. For example, Java is named after the Indonesian island, and there’s also the Jakarta framework, named after Indonesia’s capital. It seems like the creators of Java had a soft spot for Indonesia! It’s a quirky little connection in the world of programming.

Links:
https://en.wikipedia.org/wiki/Jakarta_EE
https://en.wikipedia.org/wiki/Salsa20#ChaCha_variant

There’s also the Lombok library, named after another Indonesian island. It seems like the Java developers really enjoyed their time in Indonesia! Even the Tomcat framework has a reference to Catalina, likely inspired by someone or something they encountered there. It’s amusing how they turned their holiday memories into programming names. As the old joke goes, two of the hardest problems in programming are naming things and cache invalidation—so maybe drawing inspiration from vacations isn’t such a bad idea after all!

In Java, the ChaCha20 cipher is often paired with message signing to ensure data integrity. This involves creating a Message Authentication Code (MAC), which is a cryptographic checksum generated using a key. A simple hash of the message isn’t enough because an attacker could forge it. Instead, HMAC (Hash-based Message Authentication Code) is used, which combines the message with a key to produce the MAC. This ensures that only someone with the key can generate or verify the MAC, authenticating the message’s origin and integrity. HMAC works in multiple steps, generating intermediate keys and hashing parts of the message to create a secure code. This layered approach is essential for protecting against tampering and ensuring secure communication.
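In Rust, for example, ring exposes HMAC directly; a minimal sign-and-verify sketch (assuming the ring crate discussed earlier):

```rust
use ring::hmac;
use ring::rand::SystemRandom;

fn main() -> Result<(), ring::error::Unspecified> {
    let rng = SystemRandom::new();
    let key = hmac::Key::generate(hmac::HMAC_SHA256, &rng)?;

    let msg = b"important message";
    let tag = hmac::sign(&key, msg);

    // Verification fails if either the message or the tag was tampered with.
    hmac::verify(&key, msg, tag.as_ref())
}
```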

The HMAC process involves multiple steps: hashing parts of the message, encrypting the hash with a key, and repeating the process to produce a secure authentication code. This ensures the message’s integrity and authenticity. For encryption, AES often uses GCM (Galois/Counter Mode), while ChaCha20 pairs with Poly1305, a polynomial function operating in a specific mathematical space. Poly1305 generates a 16-byte tag from the ciphertext, which is appended to the message along with the nonce for decryption. Initially, I wondered if the nonce should be private, but encrypting it doesn’t significantly enhance security—it’s like applying sunscreen twice; the protection doesn’t double. Similarly, security measures must be proportional and well-designed, just like the filters in sunscreen or microwave grids, which block specific wavelengths effectively.

Applying the same security measure twice, like encrypting a password or using sunscreen with the same SPF, doesn’t double the protection. For example, if you encrypt a password twice, it doesn’t significantly increase security because the underlying vulnerabilities remain. However, using a key derivation function (KDF) like Argon2 can enhance security by hashing the password thousands of times (e.g., 60,000 iterations). This makes brute-forcing much harder, as each attempt requires significantly more computational effort. While modern CPUs can hash quickly, the cumulative effect of multiple iterations slows down attacks, making it a recommended practice for securing passwords and encryption keys.
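A minimal sketch with the argon2 crate, following its documented password-hashing API and default parameters:

```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};

fn main() -> Result<(), argon2::password_hash::Error> {
    let password = b"hunter2";
    let salt = SaltString::generate(&mut OsRng);

    // Argon2id with the crate's default memory and time cost parameters.
    let argon2 = Argon2::default();
    let hash = argon2.hash_password(password, &salt)?.to_string();

    // Later: verify a login attempt against the stored hash string.
    let parsed = PasswordHash::new(&hash)?;
    assert!(argon2.verify_password(password, &parsed).is_ok());
    Ok(())
}
```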

Yeah, I mean, the goal basically being that you want to make it so power- and resource-heavy that the attacker will just give up and stop trying to figure out your password.

I think I read somewhere there was a calculation for AES-128. They compared it to the cost of EC2 instances and calculated how many EC2 instances you would need to break it. That was CPU-based, and GPUs may be less expensive, but the idea was that it would cost on the order of hundreds of trillions of dollars and take tens or hundreds of years. You could spend, I don’t know, millions of trillions to reduce it to tens of years.

In cryptography, authenticated encryption (AEAD) is crucial because it ensures both confidentiality and integrity. The additional data (AD) in AEAD can include metadata like file types or chunk indices, which helps detect tampering. For example, when encrypting a file split into chunks, adding the chunk index to the AD prevents attackers from rearranging chunks unnoticed. Similarly, using a unique file ID in the hash ensures chunks can’t be moved between files. This approach also catches errors from hardware failures. Tools like BLAKE3 and Merkle trees further enhance integrity verification, with Merkle trees even used in BitTorrent for syncing chunks.
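A sketch of binding the file ID and chunk index as additional authenticated data (a hypothetical helper, again assuming ring’s AEAD API):

```rust
use ring::aead::{Aad, LessSafeKey, Nonce, UnboundKey, AES_256_GCM};

// Encrypt one chunk, binding the file ID and chunk index as AAD.
// Moving or reordering a chunk then makes decryption fail, because
// the AAD no longer matches what was authenticated.
fn seal_chunk(
    key: &LessSafeKey,
    nonce: [u8; 12],
    file_id: u64,
    chunk_index: u64,
    chunk: &mut Vec<u8>,
) -> Result<(), ring::error::Unspecified> {
    let mut aad = Vec::with_capacity(16);
    aad.extend_from_slice(&file_id.to_le_bytes());
    aad.extend_from_slice(&chunk_index.to_le_bytes());
    key.seal_in_place_append_tag(
        Nonce::assume_unique_for_key(nonce),
        Aad::from(aad.as_slice()),
        chunk,
    )
}

fn main() -> Result<(), ring::error::Unspecified> {
    let key = LessSafeKey::new(UnboundKey::new(&AES_256_GCM, &[0u8; 32])?);
    let mut chunk = b"chunk contents".to_vec();
    // In real code the nonce must be unique per (key, chunk) write.
    seal_chunk(&key, [1u8; 12], 42, 0, &mut chunk)
}
```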

As for Rust, learning it can be challenging, especially with concepts like lifetimes and the borrow checker. To simplify, I often use Arc and Mutex for thread safety, trading a bit of performance for ease of use. This approach avoids lifetime issues and makes development smoother. If you’re aiming for maximum performance, you could write assembly, but that’s overkill for most projects. It’s about balancing efficiency and practicality, much like the problem-solving in The Martian—sometimes you need creative solutions to survive!
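The Arc-plus-Mutex approach in its simplest form:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: Arc gives shared ownership across threads,
    // Mutex gives exclusive access, avoiding lifetime gymnastics.
    let counter = Arc::new(Mutex::new(0u32));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```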

In The Martian Movie, the protagonist uses creative problem-solving to survive, like patching a lander’s binary code via Morse code. Similarly, learning Rust felt like a survival challenge at times, especially with concepts like lifetimes and the borrow checker. I even experienced imposter syndrome, doubting my abilities despite my experience. But reflecting on my past work and feedback from others helped me realize these doubts were just in my mind. Like Steve Jobs, who turned setbacks (like being fired from Apple) into opportunities (like leading Pixar), I pushed through the tough phase. Now, I focus on what I can control—my mindset—and use tools like Arc and Mutex to simplify Rust development, balancing performance and practicality. It’s a reminder that challenges, whether in coding or life, can lead to growth and success.

Steve Jobs bought stock in Pixar, right? He bought up some of the stock and then he had a board seat or something like that.

Yeah, maybe both, from what I understand. And actually, before he came back to Apple, he built, I think it was called NeXT Computer; he built a microcomputer.

Steve Jobs famously spoke about how seemingly disconnected events in his life—like dropping out of college, taking a calligraphy course, and being fired from Apple—eventually connected to shape his success. For example, his calligraphy knowledge influenced the beautiful fonts on the Macintosh, which later became industry standards. Similarly, learning Rust felt overwhelming at first, but pushing through the challenges transformed my career. It gave me professional independence, connected me with vibrant communities, and opened new opportunities. Looking back, the struggles made sense, even if they didn’t at the time. As Jobs said, you can’t connect the dots looking forward, only backward. Having someone remind me that the dots would eventually connect would have been invaluable during those tough moments.

You have started a Rust course, right?

I want to be the mentor I never had by creating courses that not only teach Rust but also provide guidance and support to help others overcome challenges. Initially, I focused on the community around my project, but I realized the potential to expand this globally. Now, nearly 30 people from around the world have joined my Rust course, and I’m planning to expand into other areas like cryptography, algorithms, data structures, and even kernel development with eBPF. This has grown into a movement, with others in the community contributing to courses on DevOps, QA, and UI/UX design.

I’m a strong advocate for free and open-source education (FOSE), believing that knowledge should be accessible to everyone, just like open-source software. This initiative is about empowering others and building a collaborative learning environment.

You could say I’m a bit selfish—I give away free courses and resources because it fulfills me. Sharing knowledge and helping others learn is deeply satisfying, and seeing people grow and succeed because of it is incredibly rewarding. So, in a way, my “selfishness” drives me to contribute to free and open-source education, making learning accessible to everyone.

How can people get involved and take this course?

Sure! You can share the link to our Discord group. While I personally prefer Slack for work projects because of its practicality, I’ve noticed that Discord is much more popular, especially among teenagers and gamers. In fact, Discord originally grew out of the gaming community. It’s a great platform for building communities and engaging with learners, which is why we’ve adopted it for our courses and open-source projects. We’ll include the link in the show notes for anyone interested in joining!

https://discord.gg/e9R4m2rJQy

Throughout the course, we’ll build a fully functional project step by step, starting with basics like CLI parsing using clap and progressing to more advanced topics like concurrency with Tokio and error handling. For example, we’ll create a web server, starting single-threaded and gradually adding features. By the end, participants will have a complete project and a clear understanding of how to build something from scratch. I’m still deciding on the final project—something involving networking and concurrency, like a chat client or a key manager synced across devices, would be ideal. I might even poll the community for ideas. The goal is to make the course practical, engaging, and free, ensuring it’s accessible and valuable to everyone.
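As a taste of the single-threaded starting point, here is a minimal sketch in the spirit of the Rust book’s final project (the course’s actual code may differ):

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7878")?;
    // Handle one connection at a time; concurrency comes later.
    for stream in listener.incoming() {
        let mut stream = stream?;
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf)?; // read (and ignore) the request
        let body = "Hello from Rust!";
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```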

To keep the community engaged and motivated, I organize weekly coding challenges. For example, this week’s challenge is to build a search engine library similar to Elasticsearch, supporting prefix, suffix, and wildcard queries on large datasets like Wikipedia or thousands of books. I provide hints, like using tree structures for efficient prefix queries or reversing text for suffix queries. These challenges help participants learn practical skills while building something meaningful. Rust’s memory efficiency makes it a great choice for such tasks, especially compared to Java-based solutions like Lucene (used in Elasticsearch). The goal is to foster learning, collaboration, and creativity while having fun.
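For the prefix-query hint, a tiny trie sketch; suffix queries can reuse it by inserting each word reversed:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct TrieNode {
    children: HashMap<char, TrieNode>,
    is_word: bool,
}

impl TrieNode {
    fn insert(&mut self, word: &str) {
        let mut node = self;
        for ch in word.chars() {
            node = node.children.entry(ch).or_default();
        }
        node.is_word = true;
    }

    // Walking the trie takes time proportional to the prefix length,
    // independent of how many words are stored.
    fn has_prefix(&self, prefix: &str) -> bool {
        let mut node = self;
        for ch in prefix.chars() {
            match node.children.get(&ch) {
                Some(next) => node = next,
                None => return false,
            }
        }
        true
    }
}

fn main() {
    let mut trie = TrieNode::default();
    trie.insert("rust");
    trie.insert("rusty");
    assert!(trie.has_prefix("rus"));
    assert!(!trie.has_prefix("java"));
}
```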

I just wanted to say two things related to Elasticsearch. When I learned Elasticsearch, I read the story of how it appeared. By the way, do you know how Rust appeared?

How did Rust appear?

Here’s a fun story: Elasticsearch started as a personal project when a developer’s wife asked for a recipe website to search and print recipes. He used Lucene for search but added distributed capabilities and a REST API, making it developer-friendly. The project grew so much that he left his job to focus on it, and now Elasticsearch is a major tool—though the recipe website was never finished!

Similarly, Rust began when a Mozilla developer, frustrated by a broken elevator, decided to create a safer, more reliable programming language to address C’s shortcomings. What started as a side project grew into a major language, eventually supported by the Rust Foundation. Both stories show how personal frustrations and small ideas can lead to groundbreaking innovations!

Wasn’t it involved in the Servo project from Mozilla, which was an effort to rewrite Firefox to be memory safe?

Servo:
https://en.wikipedia.org/wiki/Servo_%28software%29

I didn’t know that, but it would make a lot of sense.

Do you have any advice for new people who want to play around with Rust? Where should they start?

I think you should definitely start with the Rust book, which is a very good resource. But after a few chapters, let’s say, try to find a personal project that you like. Just think of a problem that you have. It doesn’t need to be big, like “I need to solve a problem that many people have.”

No, just think of a problem that you would like software for. It doesn’t matter that a solution already exists, because it will be a learning experience, like the challenge I mentioned: of course Elasticsearch exists, but the idea is to learn about it by building it. And at the end of the book, like I said, you’ll have a working project. It doesn’t matter if it’s practical or not; you can put it on your resume.

Actually, at my previous job, the employer found my profile on GitHub, we collaborated, and he actually hired me with a very high salary, without any interview.

Because he said, “I looked at your commits, and it’s enough for me.” And yeah, this is great, you know. That alone was enough.

That is awesome. It can lead to so many more opportunities. It might be some new job opportunities or just collaborating or just learning. It’s amazing.

About Rust: no matter how difficult it is, just think of one example in your life where something was difficult and you didn’t just stay there and give up.

You rise, you fall, and then you get across. This is what makes you more resilient in the end. You need to do this; this is life. Fail, fail again, but fail better.

audio-link:

Listen to the full interview here

https://www.youtube.com/watch?v=CNQchXK9Vh0

https://github.com/radumarias/

https://docs.google.com/document/d/1Ru5UlOz-4dz9ors3FKEg2Ac-KxKVqI_wR0EitECp-mo/edit?usp=sharing
https://github.com/radumarias/rfs
https://github.com/radumarias/genie-do
https://raw.githubusercontent.com/radumarias/rencrypt-python/refs/heads/main/resources/charts/encrypt.png
https://pypistats.org/packages/rencrypt
https://en.wikipedia.org/wiki/CAP_theorem
https://github.com/quinn-rs/quinn