This Looks Like Why FireFox Is A Pig

I’ve been exploring self-hosted (native) builds of Linux on ARM boards (R. Pi M3, Rock*, Odroid *). “It’s complicated” in a lot of ways; mostly in that The Usual Way is to do it all on Intel machines as a cross-compilation.

I find this odd, as the ARM chips of today are vastly faster than the old 486 machines on which Linux was compiled when I first started playing with it. It seems that the developers want system build times measured in small minutes and are no longer willing to accept hours. But it is what it is.

Along the way, I found that there are many build methods and build processes. Debian differs significantly from Devuan, which differs even more from Gentoo Linux.

Devuan has their own “sdk” (Software Development Kit) and a special variation on it for ARM called arm-sdk. It is based on an odd shell (zsh) and expects to be run on Intel-based equipment. I’ve dug through it and it is under-documented and over-complicated for a “simple build system”. I’ve not completely given up on it, but it would take a significant re-write (and I’d dump zsh…) to make it work as a native build on ARM boards.

Debian largely expects to install binary packages from repositories. It has a method to build packages from source, but it is based on setting up a mini-system in a chroot and then doing the bootstrap. A chroot is a reasonably good way to go, and building all the bits in it can work, except that it has complications that are not needed for a simple compile. It does help to keep the libraries right and matched to the new build while compiling on an older build machine.

Gentoo is in some ways the easiest. It ships as a “tarball”. It is almost always made from sources compiled on your target machine (though you can get binary packages of it that folks have built for you.) It does come in both SystemD and OpenRC versions and I believe the default is free of SystemD and using OpenRC. I’ve built the “userland” of Gentoo on an ARM board without issue, though it took about 4 hours (and it was on a fast ARM board).

So I went looking into Gentoo native builds on an ARM based board. I ran into this page about building using the Raspberry Pi. Interestingly, the big issue isn’t CPU speed, but memory, and it isn’t the OS as a whole that causes the problems, but compiling FireFox! That is due to FireFox being written in a language named “rust”.

https://wiki.gentoo.org/wiki/User:NeddySeagoon/Pi3_Build_Root

Motivation

The driver to do this was Firefox-54. Firefox-54 has a hard dependency on rust. Rust comes with a bundled LLVM which adds to the burden. Rust has its own package manager, cargo, which needs rust to build.

A Raspberry Pi 3 with 1GB RAM will not build rust. No rust, no cargo, no Firefox-54 or later on a Raspberry Pi 3 or other similar aarch64 systems.

LLVM is a compiler toolchain system that started life for use with the C compiler Clang as an alternative to gcc.

https://en.wikipedia.org/wiki/LLVM

The LLVM compiler infrastructure project is a “collection of modular and reusable compiler and toolchain technologies” used to develop compiler front ends and back ends.

LLVM is written in C++ and is designed for compile-time, link-time, run-time, and “idle-time” optimization of programs written in arbitrary programming languages.

So basically FireFox brings with it its own language (Rust), its own compiler back end (LLVM), and its own package manager (Cargo)… That makes for one fat system.

Looking into Rust:

https://en.wikipedia.org/wiki/Rust_(programming_language)

Rust is a systems programming language with a focus on safety, especially safe concurrency, supporting both functional and imperative paradigms. Rust is syntactically similar to C++, but its designers intend it to provide better memory safety while still maintaining performance.

Rust was originally designed by Graydon Hoare at Mozilla Research, with contributions from Dave Herman, Brendan Eich, and many others. Its designers have refined the language through the experiences of writing the Servo web browser layout engine and the Rust compiler. The compiler is free and open-source software, dual-licensed under the MIT License and Apache License 2.0.

So Yet Another Object Oriented somewhat fat language. Sigh. O.O. is an ok way to program, but it is very easy to make things that are too fat and too slow. The whole point of C is that you can get right down close to the hardware and be really really fast when you need it.

Memory management

Rust does not use an automated garbage collection system like those used by Go, Java, or the .NET Framework. Instead, memory and other resources are managed through resource acquisition is initialization (RAII), with optional reference counting. Rust provides deterministic management of resources, with very low overhead. Rust also favors stack allocation of values and does not perform implicit boxing.

There is also a concept of references (using the & symbol), which do not involve run-time reference counting. The safety of using such pointers is verified at compile time by the borrow checker, preventing dangling pointers and other forms of undefined behavior.

So, to get FireFox to compile, you build a compiler, build a language, and then compile a too fat browser in it while not having inherent garbage collection so released memory just stays used unless you think about it and clear it. Gak!

But wait, there’s more!

Firefox >=54.0

First we need rust.
Rust

Unfortunately rust needs rust to build.
That’s just like icedtea and gcc. To make that work the ebuild must be patched. The aarch64 support is there upstream, it’s just not included in the in-tree ebuild.

Copy the ebuild to your overlay and apply the following patch.
[…patch left out…]

So somehow you need to essentially “cross compile” Rust from some system that already has Rust into the chroot where you will then use Rust to build FireFox.

Then you get to rinse and repeat for Cargo:

Cargo

Cargo is much the same. Aarch64 support is available. However, the 2016-09-01 nightly snapshot, which is what the ebuild would fetch, segfaults in the QEMU chroot. The 2016-11-28 version seems to build.

Copy the ebuild to your overlay and apply the following patch.
[…patch left out…]

Then they speculated FFox might build on the Pi, with those two already built, but added a note that no, it doesn’t… even if you use distcc to farm out some of the compiles to remote Intel-based boxes, with a cross-compiler doing some of the work.

Firefox-54

With rust and cargo in packages, firefox itself might build on the Pi with help from cross distcc. (it won’t, not even at -j1).

Rust is profile masked on arm64 but it’s a hard dependency for Firefox-54.

Continue to build firefox in the QEMU chroot

If the build fails with

OK, Note To Self:

Find a FireFox derivative from pre-rust days and use that as browser of choice in builds for the ARM that are self hosted… or just use de-Googled Chromium…

In Short:

FireFox has become a big fat pig of a system that has left K.I.S.S. so far behind that it’s now an entire build system and language library with package system. What a foolish approach. In the article it had chroot build times of 7 hours for rust, half an hour for cargo, and 3 hours for FireFox. Over 10 hours just to compile a browser despite some of it being built on other machines. That’s crazy.

So, OK, now I know not to “go there”.

I’m fairly sure that the 2 GB of memory on the RockPro64 or the Odroid would be enough to make a native build possible (if not, they make a 4 GB board). But really, what kind of special idiocy requires that much memory to compile One Browser! More than the rest of the operating system… That’s what you get when you go to FruFru Object Oriented stuff… and then write crappy code in them, and then don’t bother to garbage collect…

I have to wonder if the need for massive memory to compile rust and to compile FireFox is just due to rust being written in rust, so compiling the compiler is subject to the same lack of garbage collection as compiling FireFox with the rust compiler.

IMHO, that is most likely the critical error. The language doesn’t do any inherent garbage collection of freed memory and the programmers are not paying attention to that housekeeping either.


About E.M.Smith

A technical managerial sort interested in things from Stonehenge to computer science. My present "hot buttons" are the mythology of Climate Change and ancient metrology; but things change...
This entry was posted in Tech Bits. Bookmark the permalink.

34 Responses to This Looks Like Why FireFox Is A Pig

  1. tom0mason says:

    Well you may be interested in Netsurf (http://www.netsurf-browser.org/)

    From a modern monster PC to a humble 30MHz ARM 6 computer with 16MB of RAM, the web browser will keep you surfing the web whatever your system. Originally written for computer hardware normally found in PDAs, cable TV boxes, mobile phones and other hand-held gadgets, NetSurf is compact and low maintenance by design.

    It runs quite well on my Linux box, seems to be fairly resilient with no crashes to date, but will occasionally balk at some websites (though none recently).
    Good luck with all your myriad ARM projects. :-)

  2. meckeng says:

    what about iceweasel…linux derivative of firefox? Looked at it?

  3. E.M.Smith says:

    @Tom0Mason:

    Thanks!

    I think I’m settling in on a dual approach. Devuan as released for those boards where it’s been made. For those where there is no Devuan release, a “from sources” Gentoo build.

    The use of Armbian -> Devuan via a conversion (“uplift”) is OK on those where I’ve used it, but it’s just a bit kludgey. Three levels deep of “upstream” (Devuan <- Armbian <- Debian) and at least on the Odroid XU4, it has a couple of bugs with the X display (cursor doesn't always show up at boot until you click an application / window open). The Armbian folks do a good job of making ARM work, but they are not very focused on the audio & visual stuff (being mostly "embedded" guys).

    Gentoo generally works on all arch types, but can be a bit slow to embrace a new board / type – so "some assembly required". For example, right now their arm64 'tarball' is marked as somewhat new and experimental. Not mainline… So that's why I'm not fully embracing it without reservations. It has the benefit of being SystemD Free "out of the can"; but it's not going to be first on odd hot new boards. So I'll likely need to get the .dtb Device Tree Blobs (modules) from some other build and potentially compile the kernel myself.

    We'll see, it's still in the ponder stage… but I like the way they have a defined distcc build process already :-)

    I'm going to do at least one "from scratch" Gentoo just to get experienced at it and see if it's something I want to deal with in an ongoing way. As I'm likely done buying boards for a year or two, it's also possible I could end up just sticking with Devuan and the Armbian/Devuan conversion. It has the virtues of being easy and being the same administration style (Debian apt-get & etc.)

    It's pretty much just:

    1) Build and configure u-boot – dd it onto the start of the SD card.
    2) Collect the binary blobs needed by the OS to boot – dd it onto the next slot of the SD card.
    3) Make a boot partition (various choices) with boot options & kernel (copied or built).
    4) Make a "userland" with "all the usual packages and applications"
    5) dd #3 & #4 onto the SD card right after #2 and where #1 expects to find them.

    Then boot, QA, debug, and repeat if needed.

    From my POV, the "fun bit" will be having that "built kernel" and / or "userland" made with distcc over the cluster of boards ;-)

    All that's a bit low priority though. I get to play with OS build stuff a few days a month…

    Right now my major interest is getting a desktop without quirky issues that's reasonably fast and comfortable. The Odroid XU4 is fast enough, but still a bit of a kludge of an OS that I'm running. The RockPro64 is also more than fast enough, but the OS is "very young" and has some issues with video.

    So I live mostly on the XU4 and as time permits try some variations on the OS theme to see if something better is out there; and about 1/year try some other board(s).

    IMHO, Devuan 2.0 is pretty close to right. The only real issue there is that the copy they put up on their download site for the XU4 is missing network drivers. I can either "roll my own", just wait for their next update that's likely months away, or try another OS supplier. Decisions decisions…

    So for now it's looking mostly like just "Live with the present XU4 set up" while "leaning to make one from scratch Gentoo" if it can be done in a couple of days… then just living with wherever I've gotten to.

  4. E.M.Smith says:

    Oh, and since I now know FireFox is sick at a fundamental level, I can stop waiting for them to “fix the memory bug” and just get on with picking a different browser. I found SeaMonkey for the Mac(!) of all things. It looks like they made it mostly for Intel (x86 / amd64) so it may be that it doesn’t have an ARM port… Basically a security / privacy oriented older FFox / Thunderbird.

    I’ll give Netsurf a try on the ARM boards. They all come with Chromium (the open source Chrome) so that’s an option. It is just that I am habituated to FFox… So Iceweasel or Seamonkey would be most familiar. It will only become an issue when I go to do the build, and that’s likely a ways away and totally optional.

    So mostly I need to just take the time to find out which browsers are already running on arm64.

  5. tom0mason says:

    Many years ago… perhaps at least 6 years ago, I tried Gentoo from source but, due to being very busy (and making lots of mistakes), I gave up. You will probably find it a whole lot easier than me.
    Good luck with compiling a Chromium variant (Vivaldi Browser has a Debian ARM build), or Iceweasel, or Seamonkey.
    Seamonkey includes, along with the web browser, a tool to inspect the DOM for web pages, a JavaScript debugger, an HTML editor, a mail/news client, an address book, and an IRC client. All of that makes it a very big package, over 100MB installed on my Linux box. Its latest version uses the codebase from Firefox/52.0 (Mozilla/5.0) and I believe its Gecko 2010 rendering/layout engine (https://developer.mozilla.org/en-US/docs/Mozilla/Gecko ) which appears to be not fast but stable.

    Whichever route you take it sounds interesting, wish you luck and have fun building it.

  6. jim2 says:

    Vivaldi browser – haven’t tried it, but …

    https://vivaldi.com/blog/vivaldi-for-raspberry-pi/

  7. Eric Fithian says:

    Then there is the Brave browser (from Europe); I learned of it from a blurb on the Qwant page…
    I plan to try it when(ever) I get to actually *installing* the bright, shiny, Stable version 3.0 of eLive.
    Have to go to them to fetch it, as it doesn’t show up in the eLive repository….

  8. ssokolow says:

    It is based on an odd shell (zsh)

    There are much odder shells than zsh. Zsh is a superset of the Korn shell, just like bash is, and bash is more or less slowly turning into zsh as it copies features like ** path globs that first got proven to be desirable under zsh.

    That is due to FireFox being written in a language named “rust”.

    Incorrect. Firefox was a very demanding thing to compile long before Rust came on the scene. In fact, Chrome is infamous for taking everything you’ve complained about in Devuan and Firefox and then saying “Hold my beer”.

    As one example, firefox will build on ARM32 with 4GiB of RAM, Chrome requires at least 8GiB and highly recommends 16GiB.

    See also this lament over on Phoronix which was prompted by the news that MS Edge will be switching to using Chrome’s engine.

    So basically FireFox brings with it it’s own language (Rust) and it’s own compiler back end (LLVM)

    You need to research LLVM more. First, it came about because GCC was intentionally designed to be hostile to plugins, so people had to reinvent a competitor to GCC to make progress. GCC only developed a plugin system recently, in response to fear that they’d become irrelevant with so many people implementing their experimental improvements for LLVM.

    Second, regardless of Rust, Fedora just OKed Firefox to be built with LLVM Clang (the C/C++ frontend) to get better performance and faster compiles.

    (LLVM Clang is the official MacOS compiler, both Firefox and Chrome for Windows are built using LLVM Clang instead of Microsoft’s compiler, and official Firefox builds for Linux already use LLVM Clang instead of GCC to get better performance.)

    and it’s own package manager (Cargo)

    Just like how Chrome requires Python to build, which has “pip”. That aside, don’t think of Cargo as a package manager. Think of it as a build system that just happens to know how to download packages.

    Chrome’s build system is far worse… as in “would make your face melt if you pulled it out of the Ark of the Covenant” bad. Beyond that, for a while, their own build instructions didn’t actually work and, if I remember correctly, the build system was self-hosting. (Self-hosting compiler? Sure… because nobody wants to write and maintain a compiler in assembly language. Self-hosting build-system? Sheer idiocy.)

    So Yet Another Object Oriented somewhat fat language.

    No, actually. Until Rust came on the scene, all the major web browser engines were written in C++ and Rust is designed to allow more efficient memory structures than C++’s at the expense of not being as flexible in certain areas. (Sort of a middle-ground between C and C++ in that respect.)

    The whole point of C is that you can get right down close to the hardware and be really really fast when you need it.

    Likewise with Rust. That’s why it’s building steam as a serious challenger to C and C++.

    Rust just gives you some extra protections, so the compiler can catch things like double-free, use after free, buffer overruns, and so on. (The kinds of things that keep producing security bugs in browsers based on C++, even with budgets as big as Google’s and Microsoft’s.)

    So, to get FireFox to compile, you build a compiler, build a language, and then compile a too fat browser in it while not having inherent garbage collection so released memory just stays used unless you think about it and clear it.

    You’ve got it backwards. Rust’s memory management is LIGHTER than garbage-collection. With garbage collection, memory sticks around until the garbage collector notices. With Rust’s memory management, the compiler identifies the last place the memory gets used and then inserts a call to free() after it, so memory gets freed more aggressively than with GC and without the weight of having a GC walk through the memory every so often to find unreachable bits.

    That’s what the “Instead, memory and other resources are managed through resource acquisition is initialization (RAII), with optional reference counting.” means in the passage you quoted.

    It’s basically what good C++ coders do, except the Rust compiler is explicitly designed to catch mistakes in using it.
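
    A minimal sketch of that deterministic freeing, in Rust (illustrative names, nothing from the article or the ebuilds): each value is reclaimed at a known point decided at compile time, rather than whenever a collector gets around to it.

    ```rust
    use std::cell::RefCell;

    // Records the name of each value as it is dropped, so the deterministic,
    // scope-based reclamation is observable without a garbage collector.
    struct Buffer<'a> {
        name: &'static str,
        log: &'a RefCell<Vec<&'static str>>,
    }

    impl<'a> Drop for Buffer<'a> {
        // Runs automatically the moment the owning variable goes out of scope.
        fn drop(&mut self) {
            self.log.borrow_mut().push(self.name);
        }
    }

    fn drop_order() -> Vec<&'static str> {
        let log = RefCell::new(Vec::new());
        {
            let _outer = Buffer { name: "outer", log: &log };
            {
                let _inner = Buffer { name: "inner", log: &log };
            } // `_inner` is freed right here, at the closing brace -- no GC pause.
        } // `_outer` is freed here, after `_inner`.
        log.into_inner()
    }

    fn main() {
        // The inner scope closes first, so "inner" is reclaimed before "outer".
        assert_eq!(drop_order(), vec!["inner", "outer"]);
        println!("drop order: inner, then outer");
    }
    ```

    No call to free() appears in the source; the compiler inserts the cleanup at the closing braces, which is the RAII behavior described above.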

    So somehow you need to essentially “cross compile” Rust from some system that already has Rust into the chroot where you will then use Rust to build FireFox.

    No different from GCC for C or C++, GHC for Haskell, PyPy for Python, or any other self-hosting language implementation.

    Find a FireFox derivative from pre-rust days and use that as browser of choice in builds for the ARM that are self hosted… or just use de-Googled Chromium…

    As I mentioned previously, it still won’t work if you do that because Rust wasn’t the cause of Firefox’s compile-time memory requirements and Chrome (which contains no Rust) requires at least twice as much RAM to compile as Firefox does.

    Over 10 hours just to compile a browser despite some of it being built on other machines. That’s crazy.

    There’s no “just to” about compiling a browser. A modern browser is essentially an operating system which runs on top of your operating system… and it’s not just browsers either. LibreOffice is also a big, heavy thing to compile and always has been. (I was using OpenOffice.org back when I was on Gentoo and it took forever too.)

  9. Ronnie says:

    I think you may have misunderstood Rust’s memory management. It’s not GC’d in the same way that C++ is not GC’d either, but uses RAII. It generally results in faster smaller code and run-times; and yes it is bare metal coding too

  10. gallopingcamel says:

    Akismet is giving me a hard time. I hope this gets through.

  11. gallopingcamel says:

    That’s better! Here is my comment:

    I was a happy, clueless Firefox fan until beset with problems streaming Netflix and saving bookmarks.

    So I jumped ship to Google Chrome. Have I jumped out of the frying pan and into the fire?

    Save me wise guru! I am like a damsel in distress or a mariner seeking a safe haven.

  12. Danilo says:

    A few notes about factual mistakes on Rust:

    – Rust is not an OO language
    – Rust does not require manual memory management, all memory is cleared as soon as it’s not needed anymore (you could call this compile-time garbage collection)
    – Self-hosting (bootstrapping) is *very* common for compilers
    – Have fun compiling Chrome on the Raspberry Pi, I don’t think that is possible with 1 GiB of RAM :) According to https://chromium.googlesource.com/chromium/src/+/HEAD/docs/linux_build_instructions.md#system-requirements you need: “A 64-bit Intel machine with at least 8GB of RAM. More than 16GB is highly recommended. At least 100GB of free disk space.”

    (Note: This post showed up in the Rust subreddit, so I hope my comment is not one of 7342235 comments waiting in the moderation queue :))

  13. E.M.Smith says:

    @Ronnie:

    That could very well be. The only RAIL I’m familiar with is the Robot Programming language:
    http://www.roboticsbible.com/rail-robot-programming-language.html
    so how whatever RAIL you refer to might work is a mystery to me. Feel free to post a link to the RAIL you are talking about and I’ll look it over.

    Per C++ not being garbage collected either, well, I’m not fond of C++ either as it, too, often makes fat programs that are memory hogs. While it is possible to write efficient small code in Object Oriented languages, it often is not done as the programmer must think about it and act appropriately, not just suck in vast libraries and use them in stacks of passed in modifications (i.e. not do what makes OO most useful – inheritance).

    How you can call something so big and fat that you can’t compile it in a GB of memory “small and fast” is a bit beyond me… but if you can show FireFox is smaller and faster than other browsers, go for it. Post the benchmarks. It could be that the compile is fat and slow so the code produced is faster and smaller.

    @Eric Fithian:

    I’ve got Brave on my tablet. I’m playing with it. Seems nice so far. Don’t know if there’s a tarball or ARM binaries… but I’ll get there.

    @GC:

    I use Chromium some of the time. Note that Chrome is the Google Supplied & Infested browser typically on Android that loves to send you to Google stuff like saving all your bookmarks where they can read them. Chromium is the open source version with less Google inside. It is typically found on Linux. There’s also a completely de-Googled version but I’ve forgotten the name at the moment.

    It is a fairly fast and effective browser and does not have the Fat Compile problem of FFox. I’ve seen it now showing up as the default browser (as Chromium) on the SBC builds.

    It isn’t horrible, but it isn’t my favorite either. It does tend to work and not be as much of a memory hog (or maybe I just don’t keep as many tabs open in it… as I don’t use it as much).

    Other than the risk that Google might be sucking up more information about you and your usage, it’s not that bad a browser IMHO.

    @Jim2:

    I’ve used Midori a little bit, but somehow it just wasn’t compelling. I’ll likely give it a run again now that FFox is hitting a wall on ability to build on small hardware. I’ve got a Vivaldi download “somewhere” and was intending to give it a test drive too… but “stuff happens” and priorities shift. Eventually I’ll get to it ;-)

    IIRC, Midori didn’t do a lot of things ‘fancy’, so it was a very simplified presentation of the page – video and graphics were different or missing. Then again, I last used it years ago when it was fairly new, so maybe that was more a limit of the state of development being ‘too young’.

    @All:

    This is really a 2-fold issue. I’m slowly working toward a source build of my own system, native on small ARM SBCs. So part 1 is a browser must compile in that context. FireFox is “on the limit” as I do have 2 x 2 GB memory SBCs, so I could compile on one of them and then move it to the smaller memory boards. Maybe. If 2 GB is enough even though 1 GB isn’t.

    The other is a small fast browser that doesn’t end up swapping like crazy on a 1 GB machine. I’d really like it to NOT pre-load videos and only suck in various bits of stuff as you access / ask for them. That may require a special browser focused on data flow limiting, or it may be something where you must “roll your own” and modify an existing browser. Since mobile devices often have you paying “by the byte” I can’t help but believe others care about this too and maybe the builds for mobile devices are more memory and data flow efficient. I can’t see folks on a “2 GB plan” being happy with one page of Jerusalem news sucking up 400 MB of it…

    So at some point I’ll be looking at “what does it take to compile” some of the alternative browsers along with “how much data / memory does it suck?” and how nice is it in use.

    I’d thought that was a ways off, but since Slackware is running nicely on the RockPro64, and it is all source tarball packages, I’m likely to find out as I install various browsers on it (since it looks like I get to compile them all from tarballs anyway ;-)

    To some extent that will depend on what browsers are ported to ARM on Slackware and have a package available…

    @Tom0Mason:

    I may well need luck ;-)

    I liked iceweasel. IF I can find a tarball for it, it would be an early effort for me. Seamonkey is not high on my list – I like it, but like you point out it is a big wad of everything, so a lot of work to make it right. But it was nice to use. Kind of an IceWeasel + IceDove in a package.

    I’m hoping other folks get the various bits debugged on ARM first so it’s just “compile and go”, or that they are written clean enough to not have machine dependencies. I really really don’t want to get in the business of Browser Maintenance. OTOH, that “windowshade advert” being such a sucky thing, it may well be that the only way to kill it is to put code inside a DIY browser to detect and kill it.

    In summary:

    First step passed – found bigger hardware I like. Both the Odroid XU4 and the RockPro64 are very fast and with 2 GB or more of memory. Ought to be more than enough hardware for the things I want to do.

    Second step in progress – Source build of OS. I’ve done all the various parts of it, just not all in the same single build. Both Slackware and Gentoo are all-source build products. I’ve installed both and done a Gentoo userland build (and earlier a Slackware on R.Pi as well). Just need to “pick one Linux” to build end to end.

    Third step in ponder mode – Once I’m building the OS and browsers in it, pick one I like best for security, privacy, and easy building and make any custom mods I need in it. This isn’t likely as hard as it sounds as the GNU folks are of the same attitude on security and privacy (which was why I mentioned SeaMonkey / IceWeasel). Just using the OSF / GNU browser build is likely enough. Only real question there is how well it works / how memory fat is it? The rest they’ve done.

    What was a surprise for me was discovering that a 2 GB Mac can’t run FireFox for looking at more than a half dozen pages without it starting to swap like crazy. This not only burns up the SSD in the Mac (it isn’t infinite lifetime – thus this Mac having had the SSD die), but when I made it run from a uSD card in the USB port, the sloth of uSD vs SSD was “not a problem” for any other program; yet when swap hits 1 GB it becomes painfully slow. That’s my #1 motivation for looking at other browsers. Then I found you can’t even build FireFox in 1 GB? OMG!

    Slackware, on the RockPro64 with 64 bit words – i.e. 2 x as much memory / word as the 32 bit build so a fatter OS – uses all of 240 MB of memory. That’s the WHOLE operating system WITH X windows running. Everything. I’ve run the very old FireFox in Linux on a 64 MB memory machine. That wasn’t that many years ago. What’s changed? More “eye candy” for pages and browsers to deal with AND worse coding practices… It ought to be possible to comfortably browse pages and run videos in 512 MB of memory. It ought to be possible to completely build Linux with browser in that same 512 MB memory machine. The rest is code bloat.

  14. Alvin says:

    It’s RAII (https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization), not RAIL. The whole point is to have memory freed up as soon as local variables go out of scope, or when the reference-counted pointers aren’t referenced anymore. One rarely needs to perform manual memory management with modern C++ and Rust.

    And your small rant on OOP is a bit out of place, considering that Rust actually doesn’t have traditional inheritance and has a rather different approach to OOP compared to most other “traditional” OOP languages.

  15. Ronnie says:

    Hi,

    Oh, I see, not RAIL, RAII (Resource Acquisition is Initialization)

    Ok so Rust is not an OO language; it doesn’t even support inheritance, interfaces, polymorphism and all that stuff you might use in C++, Java etc; it’s closer to Ocaml/Haskell in style than any procedural language, with quite compact syntax. It uses linear/affine types for ownership so it’s a different memory model completely – single ownership and no aliasing.

    https://en.wikipedia.org/wiki/Substructural_type_system

    That also prevents data races at compile time so you can write really crazy multithreaded code which you don’t have to worry about crashing or corrupting memory.
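
    A small sketch of what that single-ownership model buys (a hypothetical example, names mine): each thread is handed ownership of its own half of the data, so the compiler can prove no two threads touch the same memory and no locks are needed.

    ```rust
    use std::thread;

    // Each worker thread takes ownership of its half of the input, so a data
    // race is impossible by construction -- the compiler rejects any attempt
    // by the main thread to reuse the moved halves.
    fn parallel_sum(data: Vec<u64>) -> u64 {
        let mid = data.len() / 2;
        let (left, right) = (data[..mid].to_vec(), data[mid..].to_vec());

        // `move` transfers ownership of each half into its closure.
        let h1 = thread::spawn(move || left.iter().sum::<u64>());
        let h2 = thread::spawn(move || right.iter().sum::<u64>());

        h1.join().unwrap() + h2.join().unwrap()
    }

    fn main() {
        let total = parallel_sum((1..=100u64).collect());
        assert_eq!(total, 5050);
        println!("sum = {}", total);
    }
    ```

    If the main thread tried to read `left` after the `move`, the program simply would not compile – that is the compile-time data-race prevention in action.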

    Anecdotally, we use Rust at work when we want highly multithreaded SIMD optimised code with low memory footprint. Obviously we aren’t writing a browser, so YMMV

    In my use Firefox is about 20-30% more memory efficient than Chrome for similar number of tabs and I see a visually noticeable performance improvement on the graphics heavy sites.

  16. E.M.Smith says:

    @Ronnie:

    Sorry, I’d seen the . at the end of RAII. as an L in my bleary morning pre-coffee reading ;-)

    @Ronnie, Alvin, Danilo, and ssokolow:

    Ssokolow has a long comment here:
    https://chiefio.wordpress.com/2018/12/10/this-looks-like-why-firefox-is-a-pig/#comment-105103
    that I found in the SPAM bin for no reason I could figure out.

    Per Rust and OO: The context led me to believe it was OO like C++ is OO, but it has been asserted it isn’t an OO language. As I don’t write it I can’t say. So an appeal to (weak wiki) authority:

    https://en.wikipedia.org/wiki/Rust_(programming_language)

    However, the implementation of Rust generics is similar to the typical implementation of C++ templates: a separate copy of the code is generated for each instantiation. This is called monomorphization and contrasts with the type erasure scheme typically used in Java and Haskell. The benefit of monomorphization is optimized code for each specific use case; the drawback is increased compile time and size of the resulting binaries.

    The object system within Rust is based around implementations, traits and structured types. Implementations fulfill a role similar to that of classes within other languages, and are defined with the impl keyword. Inheritance and polymorphism are provided by traits; they allow methods to be defined and mixed in to implementations. Structured types are used to define fields. Implementations and traits cannot define fields themselves, and only traits can provide inheritance. Among other benefits, this prevents the diamond problem of multiple inheritance, as in C++. In other words, Rust supports interface inheritance, but replaces implementation inheritance with composition; see composition over inheritance.
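    The struct/trait/impl machinery the wiki describes can be sketched in a few lines (my own illustration; the names are made up for the example):

```rust
// A trait plays the role of an interface.
trait Area {
    fn area(&self) -> f64;
}

// A structured type: fields only, no methods defined inside it.
struct Circle {
    r: f64,
}

// The impl block attaches behaviour to the struct, filling the role a
// class body fills in other languages.
impl Area for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

fn main() {
    let c = Circle { r: 2.0 };
    println!("area = {}", c.area());
}
```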

    Sure looks like an OO language implementation to me…

    “Increased compile time and size of the resulting binaries” is not looking light and slim.

    Then RAII is also claiming OO:
    https://en.wikipedia.org/wiki/Resource_acquisition_is_initialization

    Resource acquisition is initialization (RAII) is a programming idiom used in several object-oriented languages to describe a particular language behavior. In RAII, holding a resource is a class invariant, and is tied to object lifetime: resource allocation (or acquisition) is done during object creation (specifically initialization), by the constructor, while resource deallocation (release) is done during object destruction (specifically finalization), by the destructor. Thus the resource is guaranteed to be held between when initialization finishes and finalization starts (holding the resources is a class invariant), and to be held only when the object is alive. Thus if there are no object leaks, there are no resource leaks.

    RAII is associated most prominently with C++ where it originated, but also D, Ada, Vala, and Rust.
    The technique was developed for exception-safe resource management in C++ during 1984–89, primarily by Bjarne Stroustrup and Andrew Koenig, and the term itself was coined by Stroustrup.

    Sure looks OO-tied to me… Now the question is just when does FireFox bother to do “object destruction”? Clearly not soon enough to prevent enormous memory demands if one page of a newspaper can suck up 400 MB and swapping around a few (under 10) tabs can peg a 2 GB memory machine with swaps. (Realize NO other programs I run, including a complete compilation of Gentoo userland, does that, even on a 1 GB machine.)

    So look, Rust may well be the best thing since canned beer, and it may be better than C++, but as I’m not seeing C++ as particularly great shakes, that’s thin tea.

    Similarly claiming Chrome is worse to build is unconvincing. I’d only thought it might be worth a look due to it being already IN many ARM based distros. If it’s a bigger pig too, then we’ve got two pigs, not one pig and a swan.

    I’ll need to read up on RAII before I comment further on it. IF in fact it implements a better alternative to garbage collection and makes memory management better; then the necessary conclusion is that it’s the rest of the package that’s where things are screwed up. And yes, saying you need 4 GB to 8 GB of memory to build a package is screwed up.

    Per self hosting being common in compilers: Yes, I know. Doesn’t mean you ought to require it for your browser to compile. IMHO there ought to be a separation between systems tools builds and end user applications builds. Yes, it’s fun and trendy to use the Newest Latest Greatest Unique Odd Language – but it is a PITA to everyone else to deal with that choice.

    I understand there’s a bit of a bootstrap problem in that to build interest it must be used and to be used it must have built interest; so you start out in something a bit smaller and let it have some time to build interest. Often new languages spend a while written in an old language just to get past this stage. Jumping over that just to be self-hosted is not a bright move.

    To assert it’s ‘an operating system’ in its own right complete with its own languages and need for bootstrap compilation and more: that is just asserting the same things I’m complaining about – it’s a too-fat pig that takes too many resources and is not designed as efficiently as desired.

    BTW: I’m NOT advocating gcc as the be-all and end-all of compilers. I’m hopeful that Clang / LLVM become more the standard. IMHO the Clang / LLVM mode of adoption is the better one, with it gradually moving into new uses and areas from smaller to larger as folks get experience with it and as it improves. Now it’s a built-in in many OSs and some are even being built using them. No complaints from me about that.

    That zsh is an extension of ksh doesn’t make it any less odd. It simply is NOT in the list of most often used / most familiar shells. Heck, even ksh isn’t. (Yes, I’ve used ksh). Yet Another case of using something New Different and a PITA to everyone else… Fine for personal use, or even limited tools inside an early adopter group ( I’m all for early adopters as I often am one); but it causes a threshold problem for adoption in a larger group.

    Claiming Chrome requires Python / Pip in a similar pattern just says it is a broken paradigm too. Yes, I’m not a fan of Python. It’s an OK language I guess… but just the whitespace indentation means many “copy paste” events change what your program does and makes it a PITA to post examples in any forum that does white space changing / stripping. A {} or Begin End; set is not that big a burden…

    while not having inherent garbage collection so released memory just stays used unless you think about it and clear it.

    You’ve got it backwards. Rust’s memory management is LIGHTER than garbage-collection.

    Ok, so I’ll rephrase: “memory just stays used unless you think about it and destroy your objects intelligently”…

  17. E.M.Smith says:

    As I’m on the MacBook at the moment, and it has 4 browsers installed, I did a quick “how big is it?” in the Applications directory:

    bash-3.2# du -ms Firefox.app/ IceCat.app/ Safari.app/ Opera.app/
    186	Firefox.app/
    70	IceCat.app/
    35	Safari.app/
    130	Opera.app/
    

    As IceCat is basically a FireFox from an older code base (I’m pretty sure it is pre-Rust) with the FFox images removed, it is a fair object for comparison. Safari is workable, but missing some of the newer whiz-bangs. Opera is fully functional but I know little about how it is built.

    Clearly FireFox is The Big Fat One of the batch. IceCat is 37% the size and does the same job, and is based on the FireFox codebase / feature set, but an older pre-Rust code base IIRC.

    Just sayin’…

    UPDATE:

    Yeah, looks like it’s based on FFox 3.0 and
    https://en.wikipedia.org/wiki/Firefox_3.0

    Written in C++, XUL, XBL, JavaScript

  18. Dude, have you tried building Chromium? 6 years ago that thing did not build on a 4 GB system, on x86. Not only that, it had two different bespoke build systems, each of them half-documented, with a bunch of non-functional infrastructure around it, and that’s before you get into ARM space, where it gets even weirder. The number of third-party libraries you need to compile before you build Chromium is astounding. I am surprised it’s packaged in Linux distros at all.

  19. Larry Ledwick says:

    Pale Moon is also a fork of Firefox; it would be interesting to know how it ranks size-wise on your system.

    On Windows 7 (64 bit), task manager shows:

    Palemoon (dot) exe using 201952 K in a single process (one tab open)
    brave browser has 15 processes spawned netting 862300 K of memory
    firefox with one tab open shows 5 processes running, with a total of 410072 K of memory allocated.

  20. E.M.Smith says:

    I have Pale Moon on my Tablet, but could not find a version for my old Mac. Maybe I’ll look again more fully…

    What is clear is that FireFox has become a fat pig with big memory demands just after the rewrite in Rust (from whatever details: Rust memory management, coding style changes, feature creep, …)

    There’s a shortage of browser options on arm64 at the moment, due to its being relatively new; most folks just go with the v7 32-bit instruction set, which is already debugged, while they work on v8 64-bit hardware. Heck, I’m still running a 32 bit v7 OS on one of my R.Pi boards; the v8 64 bit OS only became stable and reliable lately.

    The RockPro64 has both 32 bit and 64 bit Ubuntu on offer and it is very new HW. I want the v8 64 bit as that uses the hw float & NEON math. Perhaps a 32 bit compile of a browser on a 64 bit OS via multilib…

    It really isn’t a big deal (yet…) as a “roll my own” OS from sources is more playing than need, so any schedule will do. It is just that I despise fans and have no desire to fire up some old PC just to cross compile a browser. I’d rather fire up a couple of hot SBCs and let it run over the weekend ;-)

    So it looks like what I need to do is investigate the older “freer” versions of FireFox pre-Rust and see if any of them run on any ARM, grab a copy, and run with that. There’s several to choose from so mostly I just need to pick one; preferably after finding one that’s small, efficient, and has an existing ARM port ;-)

    Worst case is next year on my birthday a surprise “gift” shows up of an Intel based SBC (no fan!) for about $120… that I’d rather spend on good coffee, but… ;-)

  21. E.M.Smith says:

    Just an FYI: In the Gentoo build that’s using F.Fox Developer edition, on the R.Pi M3, After just clicking around a few tabs (not many) it has sucked up ALL available memory and rolled 500 MB to swap. That’s 1.5 GB – 230 MB for the OS, from opening about 1/2 dozen tabs, some in the SAME tab so memory ought to have been released…

    It essentially makes the system unusable in about 3 minutes… The swap causes a system pause that you can’t click your way out of. Don’t know if that’s due to the OS (maybe some trick with compressed swap) or just the moving of a few hundred MB to swap, but it feels like a system hang. Come back in a couple of minutes after swapping catches up and you can do your next thing… until THAT causes a swap.

  22. ssokolow says:

    that I found in the SPAM bin for no reason I could figure out.

    Possibly the combination of a long post and my use of a free spamgourmet.com forwarding e-mail that I can revoke if spammers ever compromise your site.

    the drawback is increased compile time and size of the resulting binaries.

    In Rust, it’s the programmer’s choice where and when to use monomorphization or polymorphization, just like in C++. Rust just gives the programmer an easier way to write monomorphic code because that plays well with the CPU cache and branch predictor.

    It’s essentially a language-level analogue to -O3 vs. -Os (optimize for speed vs. optimize for size). Monomorphic code plays more nicely with the CPU’s cache and branch predictor but makes for a larger binary. Polymorphic code requires more bouncing around in memory to resolve pointers, but needs fewer copies of the code. (The trade-off is conceptually similar to the loop-unrolling optimizer pass.)

    If you want to research it, Rust’s term for polymorphic dispatch is “trait objects”.
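    The two dispatch styles can be put side by side in a small sketch (my own example; both functions return the same answer, they just compile differently):

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;

impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Monomorphized: the compiler emits a separate copy per concrete T,
// like a C++ template instantiation (direct calls, bigger binary).
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Trait object: one shared copy of the function, dispatched through a
// vtable pointer at runtime (indirect call, smaller binary).
fn greet_dyn(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = English;
    println!("{} == {}", greet_static(&e), greet_dyn(&e));
}
```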

    Sure looks like an OO language implementation to me…

    Object Oriented Programming refers to a specific combination of features (which hasn’t delivered on its hype and plays poorly with the CPU cache and branch predictor due to all the indirection and dynamic dispatch).

    Rust’s approach has enough similarities that people informally talk about “objects”, but it lacks the aspects which make it difficult to write efficient code when using OOP.

    Most notably, Rust has no classical inheritance, which is what introduces most of the bloat and complexity via the dynamic dispatch requirements. In Java terms, Rust’s traits are interfaces and it has no class hierarchies.

    What people call “objects” in Rust are not officially considered objects. They’re declared using the struct keyword and are basically just C structs with some extra syntactic and compiler sugar so you can say my_struct.foo() rather than thing_struct_foo(my_struct) and don’t have to reinvent dynamic dispatch using function pointers (like GTK+ did) if your application really does need dynamic dispatch.

    In fact, within Rust itself, my_struct.foo() is just sugar for ThingStruct::foo(my_struct), which, from the machine code’s perspective, is no different from a function named thingstruct_foo(my_struct). People just call them objects in casual vernacular to distinguish them from the minimally featureful structs you see in languages like C.

    The lack of classical inheritance also allowed the Rust developers to come up with a more efficient design for the fat pointers that are used when you do ask for polymorphic dispatch.

    Rust supports interface inheritance, but replaces implementation inheritance with composition

    This just means that:

    1. Traits can inherit from each other, but that doesn’t impose any runtime burden because it’s just a way to tell the compiler that it’s a compile-time error for a struct to implement interface A without also implementing interface B.

    2. For code inheritance, Rust takes the same approach as C.

    If you want a Professor object that inherits from a Person object, you make a Professor struct that has a field of type struct Person and your professor_frobnicate(my_professor) function calls person_frobnicate(my_professor.person).

    (Admittedly, the boilerplate this involves, either manually or via macros, is one of the remaining pain points that they’re still actively exploring solutions for, since they don’t want to resort to just reinventing classical inheritance’s flaws.)
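    The Professor/Person pattern described above, written out as a sketch (the names and the frobnicate function are just the running example, nothing standard):

```rust
struct Person {
    name: String,
}

impl Person {
    fn frobnicate(&self) -> String {
        format!("frobnicating {}", self.name)
    }
}

// Composition instead of implementation inheritance: Professor
// *contains* a Person and forwards the call to it by hand.
struct Professor {
    person: Person,
}

impl Professor {
    fn frobnicate(&self) -> String {
        self.person.frobnicate()
    }
}

fn main() {
    let p = Professor {
        person: Person { name: "Smith".to_string() },
    };
    println!("{}", p.frobnicate());
}
```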

    Making “increased compile time and size of resulting binaries” is not looking light and slim.

    Metaprogramming does that whether you’re coding in Rust or C++. It’s up to the programmers to use it responsibly, whether you’re using hygienic macros in Rust or template metaprogramming in C++.

    Both the C++ and Rust ecosystems have code size profilers to help solve this. (cargo-bloat for Rust and Google’s Bloaty McBloatface for C++)

    Now the question is just when does FireFox bother to do “object destruction”? Clearly not soon enough to prevent enormous memory demands if one page of a newspaper can suck up 400 MB and swapping around a few (under 10) tabs can peg a 2 GB memory machine with swaps.

    When there are no more references to the object.

    Browsers can’t magically compensate for sloppy JavaScript programmers who write sites and browser extensions that assume they have the whole PC to themselves.

    The alternative is for the page’s JavaScript to not work at all. The equivalent would be constraining LibreOffice’s memory allocation and then being surprised when, instead of magically getting more efficient, it either errors out or segfaults after a call to malloc() fails.

    (Except that since so much of the web is done declaratively in HTML and CSS, you can use NoScript or uMatrix to get a middle-ground where you use the site’s sloppy HTML and CSS without also running its sloppy JavaScript.)

    Bad coders write bad code no matter where you look. I just helped my brother set up a new PC a couple of days ago and, if we ever have to reinstall it, I’m going to go root around online for separate driver installers. The installer on the motherboard disc was slower, more bloated, and more incompetently coded than any web page you ever saw.

    then the necessary conclusion is that it’s the rest of the package that’s where things are screwed up. And yes, saying you need 4 GB to 8 GB of memory to build a package is screwed up.

    One of the commenters over on Reddit made an interesting observation on that point. Apparently it’s usually the linking phase (ie. GNU ld) that imposes the huge memory requirements in these cases… which would make sense, given that it’s the part that has to look at the whole program, C, C++, and Rust combined.

    Similarly claiming Chrome is worse to build is unconvincing. I’d only thought it might be worth a look due to it being already IN many ARM based distros. If it’s a bigger pig too, then we’ve got two pigs, not one pig and a swan.

    Fair enough. I just didn’t want you singling out Firefox and Rust for a problem that applies to browsers in general.

    I’ll need to read up on RAII before I comment further on it. IF in fact it implements a better alternative to garbage collection and makes memory management better;

    The basic idea is that you tie the lifetime of every resource (heap allocation, open file handle, network socket, etc.) to a stack variable and, wherever a stack variable goes out of scope without handing its contents off to another variable, the compiler inserts a call to the appropriate cleanup code for the variable’s type. (C++ uses class destructors for this. Rust lets you implement an interface called Drop on your structs to define custom behaviour like “call fclose“.)
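    That scope-tied cleanup can be made observable in a few lines (my own sketch, using a counter so the automatic destructor call is visible):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times cleanup ran, so the automatic call is testable.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Guard; // stands in for a file handle, socket, heap buffer, etc.

// Drop is Rust's destructor hook: the compiler inserts the call where
// the owning variable goes out of scope, which is RAII in a nutshell.
impl Drop for Guard {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn use_resource() {
    let _g = Guard;
    // ... work with the resource ...
} // `_g` leaves scope here; drop() runs with no explicit call

fn main() {
    use_resource();
    println!("cleanup ran {} time(s)", DROPS.load(Ordering::SeqCst));
}
```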

    Per self hosting being common in compilers: Yes, I know. Doesn’t mean you ought to require it for your browser to compile. IMHO there ought to be a separation between systems tools builds and end user applications builds.

    I’m not sure what you’re saying here. Did Mozilla vendor a copy of the Rust compiler while I wasn’t paying attention? (If not, then, as phrased, that seems to apply equally well as an argument against other self-hosted languages like C.)

    If they did vendor a copy of rustc rather than supporting use of the system copy of rustc, then I agree that what they did is wrong.

    Often new languages spend a while written in an old language just to get past this stage. Jumping over that just to be self-hosted is not a bright move.

    Rust was written in Ocaml for somewhere around half a decade before it became self-hosted. The thing that makes Rust odd is that it took close to a decade for the language’s design to stabilize, so the compiler became self-hosted on a normal schedule, but the language’s design continued to churn long beyond that.

    To assert it’s ‘an operating system’ in it’s own right complete with its own languages and need for bootstrap compilation and more: that is just asserting the same things I’m complaining about – it’s a too fat pig that takes too much resources and is not designed as efficiently as desired.

    Firefox doesn’t need bootstrap compilation (though, as I mentioned, Chrome’s build system used to), but, aside from that, I agree that the web has become a pig. There’s a reason I neither use nor write desktop applications using browser-based application frameworks like Electron. It’s bad enough that the browser is so much heavier than everything else.

    That zsh is an extension of ksh doesn’t make it any less odd. It simply is NOT in the list of most often used / most familiar shells. Heck, even ksh isn’t. (Yes, I’ve used ksh). Yet Another case of using something New Different and a PITA to everyone else… Fine for personal use, or even limited tools inside an early adopter group ( I’m all for early adopters as I often am one); but it causes a threshold problem for adoption in a larger group.

    Fair enough. I probably would have expressed the same sentiment by saying “they made the odd choice to depend on zsh” or “they made the inconsiderate decision to depend on zsh instead of just bash”.

    Ok, so I’ll rephrase: “memory just stays used unless you think about it and destroy your objects intelligently”…

    Even that’s a bit of an understatement. The Rust compiler enforces that all memory allocations done using the safe (default) subset of the language must have a single owner. (The unsafe extensions are used for calling C code or writing things the compiler can’t automatically verify the memory-safety of, like the standard library’s reference-counted pointer implementation.)

    That makes it much less likely that you’ll hold onto things longer than you need them, because the most common way to leak memory in a garbage-collected language is to forget that you’re holding an extra reference to it somewhere.

    In Rust, shared ownership is opt-in (by default, the compiler will error out if you accidentally try to keep two references to something active at the same time) and means increased verbosity, so people tend to use other patterns unless it’s actually necessary.

    If anything, what I tend to see is people who asked for optimization advice being told to extend the lifetimes of things. For example, clearing and reusing String variables to avoid the overhead of repeatedly allocating and then freeing them in a hot loop.
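    That String-reuse pattern looks like this (my own sketch): clear() empties the string but keeps its heap allocation, so the hot loop allocates at most once.

```rust
use std::fmt::Write;

// Reuse one String buffer across loop iterations instead of allocating
// a fresh one each time.
fn render_lines(n: usize) -> String {
    let mut buf = String::new();
    for i in 0..n {
        buf.clear(); // length goes to 0; capacity (the allocation) survives
        write!(buf, "line {}", i).unwrap();
        // ... use buf for this iteration ...
    }
    buf // holds the last line rendered
}

fn main() {
    println!("{}", render_lines(3));
}
```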

  23. E.M.Smith says:

    I’m not sure what you’re saying here. Did Mozilla vendor a copy of the Rust compiler while I wasn’t paying attention? (If not, then, as phrased, that seems to apply equally well as an argument against other self-hosted languages like C.)

    I was looking at the requirements to build FireFox. The build directions I was looking at (cited way up somewhere…) said it packaged with Rust and LLVM. It is possible I mis-read it, or that the guy claiming it was unclear in the writing. The bottom line is that Rust and LLVM are NOT part of many standard Linux builds. So to get FireFox in any case (built from source) you need to get and build them, too.

    This all came about from my exploring a Gentoo install / build, and Gentoo is a source distribution, so by definition you get to build your own from sources – which it looks like is impossible for Rust on a 1 GB and maybe a 2 GB machine. Then, being written in Rust, one must get a working Rust compiler in order to get a working Rust compiler… As EVERY system has C on it (and usually C++) you would get wider language acceptance in the source-build community with some kind of bootstrap compiler in C.

    I’m an “old guy” and at least during my time doing this stuff decades back, it was VERY common to have a bootstrap compiler (usually simplified and with no optimization) to do the first compilation and then use the product to do your final compile of the self hosted code. Some of them were even written in assembly.

    The bottom line is that this ends up with 2 “will not build” hurdles to cross. Yes, I know I could cross them by booting up the “Whirring Monster” in the corner and doing a full on OS install / chroot environment etc. etc. just to get a compile to complete, but frankly that’s just more “bother” than I’m interested in swallowing. Easier to just go find a pre-Rust clone fork and use / build it.

    Left dangling is the question of: If not Rust, just why is FireFox so much larger than the other browsers and just why does it seem to suck up more memory and swap more in use on small machines? You assert it isn’t Rust, but there isn’t much else to explain it. OTOH, something not in evidence is hard to see, so it could well be some unknown unknown.

    In all cases, it doesn’t matter to me. What matters is that FireFox is a big No-Go for the target build unless I snag a binary copy – which isn’t quite in line with the goal of building a source based system… It is also perilously close to not working at all on several of my machines (one being a MacBook Air with 2 GB memory. It ends up swap bound after a few dozen tabs are looked at, and sometimes as few as 1/2 dozen – so I quit FFox and reload it… and 2 GB of swap get released…) This is NOT a low end SBC (like most of my toy systems…) And no, I have zero interest in putting a Management Engine / Meltdown / Spectre etc. etc. Intel PC back into service just to run FireFox…

    There is something wrong in FireFox in how much memory it sucks up, that is not wrong in other browsers. (IceCat, Opera, & Safari do not have this problem on the Mac, for example). That’s just the facts.

  24. jim2 says:

    I certainly concur that a browser can’t fix bad JavaScript. I don’t use it regularly, but having studied it quite a bit, I consider it an abomination to mankind. That said, as CIO points out, other browsers can render the same web sites without killing the OS. So, maybe Rust isn’t as pristine at memory management as stated by some above. Here is a possible clue …

    https://gankro.github.io/blah/linear-rust/

  25. Ronnie says:

    @jim2 – that’s a great, positive write-up on linear types. It suggests (and it is borne out in practice) that you don’t waste resources at all; no unreclaimed memory, no dangling references etc. Typically in Rust you will get the minimal subset of code and memory you possibly could have written had you written it in raw C. Which is generally suggested in the benchmarks and binary size comparisons.
    Though the binaries are static (unlike C, which links to libc and thus hides its true size), so you end up with a bigger file on disk though in reality it’s actually using fewer resources.

  26. jim2 says:

    Maybe, Ronnie, you didn’t read the entire article? From the link:

    Actually there’s one escape hatch: mem::forget(). mem::forget will take any value and prevent it from being Dropped. It can be thought of as The All User of values, that any must-use value can be used by. Mostly it’s just used by unsafe code to take ownership of the data owned by a Drop type (see: Vec::into_raw_parts).

    Also there’s some several ways for destructors to not run (often called “destructor leaking”):

    building a reference counted cycle which will leak into infinity
    overwriting the value with ptr::write/copy
    aborting the program
    never leaving the scope the value is defined for

    Drop also has several annoying limitations:

    It can’t produce a value
    It can’t take extra values in
    There Is Only One Drop

    This means that Drop can’t be used to ensure step2 gets called, nor can it be used if step3(takes, other, values). It also means it can’t be used to ensure one of step3a or step3b is called.

  27. jim2 says:

    Perhaps the OS is cleaning up after Rust or sloppy Rust programmers? Perhaps the ones CIO is using expect the program to behave? Don’t know, just speculating.

  28. E.M.Smith says:

    Looking at that link I’m again feeling that Oh God Yet Another Language To Save Us feeling. The comment about “where the Rust Evangelism Task Force hangs out” reminded me of all the various advocates for Language As Protector have pushed Yet Another Language at us (me). Pascal was the first Straight Jacket language I learned. Then there was Ada – supposed to be The Best Thing and demanded by the military for their projects. And on and on.

    The OO came along as The Great Savior, promising faster code development and re-usability of code as The New Features (along with some “safety” enhancements). Same story used for FORTRAN and ALGOL over assembly, and C or Pascal or Ada over FORTRAN & ALGOL; and Focus / Ramis 4th Generation over C, Pascal, PL/1, Ada… and… (work in sidebars on Perl, Python, Ruby, Ruby on Rails, and a long wander down LISP, Haskell, etc. etc.)

    So now I’m seeing the same movie I’ve seen a half dozen times before: Yet Another Language to save us from ourselves… And the resultant product is a fat pig that consumes machines like breakfast cereal.

    Look, I know I’m being an old surly curmudgeon here, but I’ve lived the history of computing from pretty much start to end. (I missed out on vacuum tube computers but not by much, I was doing radio in the vacuum tube era as DIY computers were out of my budget). After a while you notice the same thing over and over.

    A Language will never save you from yourself. Never ever.
    It just changes the kinds of errors you can make.

    The more stuff you put in the language and the more stuff the compiler must do, the bigger, fatter, slower, and crummier the programs produced in it. I do hold out hope this is not a hard truth and only a consistent trend and someday someone will find a way out of it. So far that’s not the case. Especially for Straight Jacket languages with lots of type checking and enforced behaviours; all that comes via cycles and bytes… There’s a reason Unix is still written in C.

    Sidebar on C: It isn’t my favorite language. I’m OK in it, but it rankles on how it handles I/O (needing to jump through hoops to read a fixed format flat file and with I/O as function calls when it’s easy to just make it part of the core language…). I just admire that it IS incredibly efficient and flexible. Almost as efficient as assembler, yet you can do complex things if desired.

    Reading that link on Rust linear types, I cringed at how much programmer thought time must go into selection and use of special types designed to protect you from yourself. Yes, I’m new to the language, but really… It’s a mix of navel gazing and playing with yourself when you spend more time fussing over the Special Features of the language than goes into just writing fast, tight, clean code.

    I come by my biases honestly. I’ve literally lost track of how many computer languages I’ve used at one time or another. (Even OO – I managed a project that produced a significant product using OO methods). As a contractor I’d often be tossed into a site using some language I’d not seen before and need to pick it up quick and then fix what was broken. (Most fun was diddling the inode reading code on a Cray running Unicos to let it account for nearline storage in a tape robot ;-) while Least fun was digging around in PL/1 and COBOL to find where someone had broken calls to the Focus Database). So it’s not like I don’t have a varied background or am stuck on My One True Language. In fact, my major bias is just realizing that the Language Zealots are all pretty much of too narrow an exposure to see that most languages are OK to pretty good and that no language is Truly Great! for everything. It’s all tradeoffs and dealing with the mistakes made in the language design / limits by workarounds.

    To the extent I have a “hot button” it is efficiency. At one time I was a Senior Consultant on a database product. My specialty was efficiency reviews. I went in to the State Of California Architects office where they had maxed out their mainframe. A few $Million for another one… A week later I walked out and the machine was 4% utilized doing the same job. ALL because folks wrote inefficient sloppy code. (Sorting before selecting, for example. Just select first and sort the subset. Yes, it’s simple and easy. BUT if you are not thinking about it and just writing something that works…)
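    The select-before-sort point can be sketched as code (my own illustration, in Rust rather than the database language involved): both give the same answer, but one sorts every row and the other sorts only the matching subset.

```rust
// Select first, then sort: the sort is O(k log k) on the k matches only.
fn select_then_sort(rows: &[i32], limit: i32) -> Vec<i32> {
    let mut subset: Vec<i32> = rows.iter().copied().filter(|&x| x < limit).collect();
    subset.sort();
    subset
}

// Sort first, then select: O(n log n) work on every row, most of it wasted.
fn sort_then_select(rows: &[i32], limit: i32) -> Vec<i32> {
    let mut all = rows.to_vec();
    all.sort();
    all.into_iter().filter(|&x| x < limit).collect()
}

fn main() {
    let rows: Vec<i32> = (0..10_000).rev().collect();
    // Same result either way; only the amount of work differs.
    assert_eq!(select_then_sort(&rows, 100), sort_then_select(&rows, 100));
    println!("both return {} rows", select_then_sort(&rows, 100).len());
}
```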

    So now I look at a product who’s function is basically:
    Present text and pictures in a window.
    Run videos.

    And that, it is claimed, needs more GB to compile than most machines had up to about y2k (and then some) and can not present the pretty pictures, text, and a couple of videos from a 1/2 dozen to dozen pages without swapping on a 2 GB computer? Something insane and very wasteful of resources is going on to require that. Really.

    I was manager at a site at Apple where we brought up the first Gbit network and frame buffer for display of full motion animation. We used a Cray XMP-48. So I know what work it is to create full motion high res video. But my Raspberry Pi M3 has more compute power than that computer had, and that Cray had 64 MB – yes MEGabytes of memory… so telling me you need 2 GIGabytes just tells me you are doing something wrong by a few orders of magnitude… It takes real talent to waste that many computes and bytes…

    Too many layers all written by folks repeating the mantra “memory is cheap” and “CPUs are fast” and not enough running profilers, looking to improve efficiency, finding the best way to DO the job instead of the fastest way to be done writing code.

    My Brother-in-Law (Ph.D. Aeronautics) worked at NASA doing aeronautics software stuff. He had an interesting chart. It showed improvements in computes from Moore’s Law and from Improved Algorithms. The Algorithm line rose faster….

    This means that thinking about what to do and how to do it better, faster, more efficiently matters more than Moore’s Law, and all those folks saying we do NOT need to think about efficiency due to Moore’s Law are just WRONG. Essentially, BAD programming can consume new computes faster than Moore’s Law can deliver them. (We see proof of this in the Microsoft Windows Desktop that is doing essentially the same job it did in the 1990s at sometimes slower speeds to results, after several orders of magnitude increase in hardware computes…)

    What I see in FireFox is the same effect.

    You can blame it on Java / JavaScript and VM indirection or “whatever”, but it is there; and it got worse in the move from the last C/C++ version of the product to Rust. Now is that ALL static linking? I don’t think so… (But even there, we now link in huge libraries to use one small corner of them…)

    Moore’s Law is ending now. We’ve already needed to resort to ever more multi-cores to continue making faster machines. We’re at the top of the S of the logistic curve, and nm feature widths are reaching the point where “side effects” of small scale / quantum physics become a hard limit. So eventually folks will need to go back and look at the last 3 decades of lousy inefficient coding and start fixing it. For me, I’m just not going further down the Bloat Beltway To Nowhere…

  29. Larry Ledwick says:

    I see that at work: our proprietary code is fairly mature now, and the cost of the machines is high enough that it is a lot cheaper to dig into the code and fix poor execution strategies than it is to buy, install, and configure more new hardware. We are now dealing with db tables with several billion rows. Crappy SQL queries on tables which are not properly indexed or created in sorted order can run for days; once the issues are figured out, the same results come back in minutes or hours.
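    A toy sketch of why the missing index hurts so much (the table, column names, and a hash map standing in for a database B-tree index are all made up for illustration):

```rust
use std::collections::HashMap;

// Toy "table" row; illustrative only.
#[derive(Debug, PartialEq)]
struct Row {
    customer_id: u64,
    amount: u64,
}

// Unindexed query: a full table scan touches every row, O(rows) --
// on billions of rows, per query, this is the "runs for days" case.
fn scan(table: &[Row], id: u64) -> Vec<&Row> {
    table.iter().filter(|r| r.customer_id == id).collect()
}

// "Index": built once in O(rows), after which each query costs only
// O(matching rows) -- the "minutes or hours" case.
fn build_index(table: &[Row]) -> HashMap<u64, Vec<usize>> {
    let mut idx: HashMap<u64, Vec<usize>> = HashMap::new();
    for (i, r) in table.iter().enumerate() {
        idx.entry(r.customer_id).or_default().push(i);
    }
    idx
}

fn lookup<'a>(table: &'a [Row], idx: &HashMap<u64, Vec<usize>>, id: u64) -> Vec<&'a Row> {
    idx.get(&id)
        .map(|rows| rows.iter().map(|&i| &table[i]).collect())
        .unwrap_or_default()
}

fn main() {
    let table = vec![
        Row { customer_id: 7, amount: 100 },
        Row { customer_id: 3, amount: 50 },
        Row { customer_id: 7, amount: 25 },
    ];
    let idx = build_index(&table);
    // Same rows back, without touching every row in the table.
    assert_eq!(lookup(&table, &idx, 7), scan(&table, 7));
}
```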

    We have a couple of the developers who specialize in figuring out exactly where the system is spending its time during a job run, and then finding ways to clean up the time hogs.

  30. ssokolow says:

    @E.M. Smith:

    The bottom line is that Rust and LLVM are NOT part of many standard Linux builds. So to get FireFox in any case (built from source) you need to get and build them, too.

    Fair enough. Mozilla’s rationale for incorporating Rust into Firefox so quickly is similar to their decision to make the transition to WebExtensions so quickly. They spent so long trying to save their old approaches that Firefox is down near 10% market share, Chrome is up around 60%, and Chrome-based browsers eat up most of the remaining 30%.

    They felt that it was a rock-and-a-hard-place situation with drastic action needed to reallocate their scarce developer resources and forcing the switch to WebExtensions was like cutting off a gangrenous limb. The whole point of using Rust is that it moves much of the safety checking that is so critical for a browser from the programmer to compile-time checks.
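    One concrete facet of “safety checking moved into the language”, as a small illustrative sketch (the buffer is hypothetical; the commented-out snippet shows a real compiler error class, E0597):

```rust
fn main() {
    let pixels: Vec<u8> = vec![0; 4];

    // In C, pixels[10] would read past the end of the buffer --
    // undefined behavior, and a classic browser exploit vector.
    // Rust's checked accessor returns None instead:
    assert_eq!(pixels.get(10), None);
    assert_eq!(pixels.get(0), Some(&0u8));

    // Plain indexing (pixels[10]) is also checked: it panics at
    // runtime rather than silently corrupting memory.

    // And lifetime errors are rejected before the program ever runs:
    //     let r;
    //     {
    //         let s = String::from("gone");
    //         r = &s; // error[E0597]: `s` does not live long enough
    //     }
    //     println!("{}", r);
}
```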

    As EVERY system has C on it (and usually C++) you would get wider language acceptance in the source build community with some kind of bootstrap compiler in C.

    They haven’t developed an official policy for keeping it up to date with the language spec yet, so, at the moment, using it to bootstrap takes enough intermediate steps that it’s more useful for verifying the absence of Trusting Trust attacks; but such a compiler does exist. It’s called mrustc and it’s written in C++.

    If not Rust, just why is FireFox so much larger than the other browsers and just why does it seem to suck up more memory and swap more in use on small machines? You assert it isn’t Rust, but there isn’t much else to explain it. OTOH, something not in evidence is hard to see, so it could well be some unknown unknown.

    Firefox descends from a rewrite of Netscape Communicator that never got released because it bought into all the hype of early 2000s buzzwords like Object-Oriented and XML and was an infamously bloated pig as a result. (Source: Blog posts by JWZ). The XUL-based UI which Firefox is only now preparing to phase out is an example of that. Ever since the Mozilla Suite days, their ability to refactor the browser internals has been seriously crippled because the extension API allowed extensions to depend on internal implementation details willy-nilly. The WebExtensions push and the rewrite in Rust are part of the effort to finally fix that.

    (Dropping support for non-WebExtensions is allowing them to finally rework browser internals without breaking extensions and Rust is intended as a way to mitigate the risk of introducing bugs when doing major rewrites.)

    Chrome, on the other hand, is built on a fork of Apple’s WebKit, which is adapted from the KHTML engine from KDE’s Konqueror. As I understand it, that started out as a cleaner codebase. (As is typical when a project is open-source from the beginning rather than starting as company-internal code.) Firefox also reinvents a lot of C++ constructs that modern compilers provide, but didn’t exist or didn’t work reliably at the time the code was written.

    Now as to why it got heavier around the same time Rust became mandatory, they made a lot of changes around then. Rust was just the one that was most visible from the perspective of building from source.

    You may not be aware of this, but the original and Rust versions of the code in question co-existed for months before Rust became mandatory so that they could do comparative testing and make sure the Rust code wouldn’t cause any regressions (performance or memory usage included).

    There is something wrong in FireFox in how much memory it sucks up, that is not wrong in other browsers. (IceCat, Opera, & Safari do not have this problem on the Mac, for example). That’s just the facts.

    I certainly agree that Firefox has issues. (Though I’ve found Chrome to be comparable or worse when doing apples-to-apples comparisons to try to save RAM.)

    I haven’t looked into how IceCat has diverged from upstream Firefox, but Opera and Safari don’t have Firefox’s XUL+JavaScript GUI and they are built on Blink and WebKit, respectively (which, as I mentioned, aren’t full of old Netscape code still waiting to be refactored away).

    @jim2:

    Maybe, Ronnie, you didn’t read the entire article? From the link:

    I have no idea how you think that post is of any relevance. It’s talking about how Rust falls short of a standard that is only met by certain research languages and every downside that’s pointed out is an example of a case where Rust matches C++ rather than exceeding it.

    (mem::forget() just enables the same semantics you get by allocating memory using new in C++. Building a reference-counted cycle in C++ using std::shared_ptr will also leak into infinity. etc. etc. etc.)
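    Since mem::forget() and reference cycles came up, here is a minimal Rust sketch (the Node type is made up for the example) showing that a reference-counted cycle leaks in entirely safe Rust, exactly as a std::shared_ptr cycle does in C++:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical node type, just enough to build a cycle.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    // Close the cycle: a -> b -> a.
    *a.next.borrow_mut() = Some(Rc::clone(&b));

    // Each node is now kept alive by the other, so when `a` and `b`
    // go out of scope the counts only drop to 1 and neither node is
    // ever freed -- a leak, with no `unsafe` anywhere.
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(Rc::strong_count(&b), 2);

    // mem::forget() is the other "safe leak": this String's buffer
    // is simply never reclaimed.
    std::mem::forget(String::from("never freed"));
}
```

The usual fix is the same as in C++: break the cycle with a weak pointer (Rc::downgrade in Rust, std::weak_ptr in C++), since memory leaks are wasteful but not memory-unsafe in either language.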

    @E.M. Smith:

    Looking at that link I’m again feeling that Oh God Yet Another Language To Save Us feeling. The comment about “where the Rust Evangelism Task Force hangs out” reminded me of how all the various advocates for Language As Protector have pushed Yet Another Language at us (me).

    I’m not sure which comment you’re referring to, but I need to point out that “Rust Evangelism Task Force” is a joke name the people in /r/rust/ gave to the fanboys who engage in unwanted and unhelpful “Why don’t you rewrite this in Rust?” behaviour, which more reasonable members of the community then have to clean up after.

    As for “Language As Protector”, I’m not going to fanboy it up, arguing that you’re wrong, but I will say that Rust is justifiably seeing gains in two niches:

    1. Big projects where no one programmer can keep everything in their head and it’s easy for security and data corruption bugs to sneak in when one person changes an API without someone else noticing or a new programmer misunderstands code written by someone who’s left the company. (A web browser is a perfect example of this.)

    2. Providing a faster, more lightweight alternative to people who, until now, have turned to languages like Python and JavaScript.

  31. jim2 says:

    My point wasn’t to compare Rust to C++. Just pointing out it can leak memory. That’s all.

    That said, I appreciate your input on this. Rust looks interesting.

    Can you explain CIO’s experience with Firefox on small ARM systems vs more full-blown hardware?

  32. E.M.Smith says:

    @Jim2:

    I have essentially the same complaints and things have similar size issues on non-ARM and large systems. The only real difference is large systems have the 4 to 8 GB of memory the developers are now expecting to be everywhere (but isn’t…)

    The difference of experiences is just a result of having a system with 512 MB to 2 GB of memory. A PC with 1 GB will have the same issues.
