• 0 Posts
  • 5 Comments
Joined 2 years ago
Cake day: July 5th, 2023

  • When I plug my phone into the wall, there are chips in the wall charger and on both sides of the cable, because the simple act of charging requires a handshake and an exchange of information notifying the charger, the cable, and the phone what charging modes are supported, and how to ask for more or less power.

    Seriously? Am I the only one thinking this could be done with fewer than 10 chips?
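    The negotiation itself isn’t the hard part; the point is that each end needs a small controller to run it. Here’s a toy sketch of the idea in C (the voltage/current profiles and the 9 V limit are invented, and this is nothing like the actual USB Power Delivery wire format):

    ```c
    /* Toy sketch of a charger/phone power negotiation.  This is NOT the real
     * USB Power Delivery message format, just the basic idea: the source
     * advertises what it can supply, the sink picks something it can accept. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        int millivolts;
        int max_milliamps;
    } power_profile;

    /* What this (hypothetical) charger can offer. */
    static const power_profile source_caps[] = {
        { 5000,  3000 },
        { 9000,  2000 },
        { 15000, 1800 },
    };

    /* The phone picks the highest-wattage profile within its voltage limit. */
    static bool sink_pick(const power_profile *caps, size_t n,
                          int sink_max_mv, power_profile *chosen) {
        bool found = false;
        long best_mw = 0;
        for (size_t i = 0; i < n; i++) {
            if (caps[i].millivolts > sink_max_mv)
                continue;                       /* can't handle this voltage */
            long mw = (long)caps[i].millivolts * caps[i].max_milliamps / 1000;
            if (mw > best_mw) {
                best_mw = mw;
                *chosen = caps[i];
                found = true;
            }
        }
        return found;
    }

    int main(void) {
        power_profile request;
        /* Pretend this phone tops out at 9 V input. */
        if (sink_pick(source_caps, sizeof source_caps / sizeof source_caps[0],
                      9000, &request)) {
            printf("sink requests %d mV at up to %d mA\n",
                   request.millivolts, request.max_milliamps);
            printf("source accepts, switches its output, charging starts\n");
        } else {
            printf("no agreement, stay at the default 5 V low-current mode\n");
        }
        return 0;
    }
    ```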

    How many chips are in a fully configured desktop computer? There are dozens on any given motherboard, controlling all the little I/O requirements. Each module of RAM is several chips. If you use expansion cards, each card will have a few chips, too. Meanwhile, the keyboard and the mouse each have a few chips, and the display/monitor has a bunch more.

    I’d be surprised if the typical computer had less than 100 chips.

    Now let’s look at car functions. A turn signal that blinks, oscillating between on and off? That’s probably a chip. A windshield wiper that can do intermittent wiping at different speeds? Another chip or more. Variable valve timing that’s electronically controlled? Another few chips. Each sensor that detects something, from fuel tank status to engine knocking to air/fuel mixture? Probably another chip. Controllers that combine all this information to determine how to mix the fuel and air, whether to trigger a warning light on the dash, etc.? Probably more chips. What about deployment of airbags, or triggering of the anti-lock braking system? Cruise control requires a few more chips, as speedometers and odometers are now electronic rather than the old analog systems. Smart cruise control and lane detection need even more chips. Hybrid drivetrains that charge or discharge batteries need dozens of chips controlling the flow of power (and the logic of when power should flow in which direction).
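    For scale, the logic behind something like that blinker is close to trivial. A toy, host-runnable version in C (gpio_write and sleep_ms are made-up stand-ins for whatever the real microcontroller provides, not any actual automotive API):

    ```c
    /* Roughly the kind of logic a turn-signal controller runs: toggle an
     * output on a fixed period.  gpio_write() and sleep_ms() are stand-ins,
     * not a real microcontroller API. */
    #include <stdbool.h>
    #include <stdio.h>

    static void gpio_write(bool on) { printf(on ? "lamp ON\n" : "lamp off\n"); }
    static void sleep_ms(int ms)    { (void)ms; /* a real part would wait here */ }

    int main(void) {
        bool lamp = false;
        for (int i = 0; i < 6; i++) {   /* a real blinker loops until cancelled */
            lamp = !lamp;
            gpio_write(lamp);
            sleep_ms(350);              /* ~700 ms per on/off cycle, a typical blink rate */
        }
        return 0;
    }
    ```

    Trivial logic, but in a modern car it typically runs on some chip rather than the old thermal flasher relay.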

    By the time Toyota was in the news in 2011 for potential throttle sticking problems that killed people, it was typical for even economy cars to have something like 30 ECUs controlling different things, with each ECU and its associated sensors requiring multiple chips.

    Some modern perks require even more chips. Automatic lights? High beam dimming? Automatic wipers? Remote start or shutting off the engine at idle?

    And that’s just for driving. FM tuner? Chips. AM tuner? More chips. Bluetooth and CarPlay/Android Auto? More chips. Rear-view camera, now mandated on all new cars? More chips. A built-in GPS or infotainment system? A full-blown computer.

    The jobs of all the little analog controllers that were present in cars in the ’80s are now handled more efficiently by integrated circuits, including the analog ones. Each function will require its own chip. If you’re trying to recreate the exact functionality of a typical car from the 1990s, you’d probably still need a minimum of a few hundred chips to pull it off. And it’s probably smart to segment things so that each module does one thing in a specialized way, isolated from the others, lest an unexpected input on the radio mess up the spark plug timing.

    The world is run by chips, and splitting up the functions into multiple computers/controllers, with multiple chips each, is just the easier and more efficient way to do things.


  • Tags interfere with human readability. Open any markdown file in a plain-text editor and you can basically read the whole thing as it was intended to be read, with the possible exception of tables.

    There’s a time and a place for different things, but I like markdown for human readable source text. HTML might be standardized enough that you can do a lot more with it, but the source file itself generally isn’t as readable.
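    A made-up example of the difference, the same content written as markdown source and as HTML:

    ```markdown
    # Trip notes

    We left **early** and took the [coast road](https://example.com/map).

    - pack water
    - check tire pressure
    ```

    ```html
    <h1>Trip notes</h1>
    <p>We left <strong>early</strong> and took the
      <a href="https://example.com/map">coast road</a>.</p>
    <ul>
      <li>pack water</li>
      <li>check tire pressure</li>
    </ul>
    ```

    Both render roughly the same, but only the first reads cleanly when opened as plain text.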


  • That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but ended up needing to invent and publish the technical standards that made federation/interoperability possible, after government agencies started mandating them. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards regarding the physical layer of actual copper lines and switches.


  • the only option for top performance will be a SoC

    System in a Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node from Samsung or whichever foundry is making the memory.

    But with the way advanced packaging has gone over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.


  • I’d argue that telephones are the original federated service. There were fits and starts in getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long-distance calling over the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950s, and laid the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.

    We didn’t call it spam then, but unsolicited phone calls have always been a problem.