What is the most reasonable way for non-binary computers to have become standard?











up vote
33
down vote

favorite
10












Let us assume planet Earth, with a history similar to ours. Except that the result of the computer revolution is not a computer system based on binary (i.e. 0 and 1), but some other system. This system could be digital, with more than two digits, or otherwise.



Transistors were invented in this alternate timeline, in the 1950s. Any other technology that was invented can be shaped to favor a non-binary computing system.



What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?










technology alternate-history computers






asked Nov 26 at 13:22









kingledion









  • 11




    Another problem with "non-binary becoming the standard" is that binary electronics are a lot faster, because you only need two voltage states, and the circuitry required for that is stupendously simple -- and thus can be made very fast -- compared to multi-voltage systems. It's why binary became dominant.
    – RonJohn
    Nov 26 at 15:00






  • 7




    Binary became dominant because transistors, which are intrinsically two-state devices, were invented. There was no point developing 10-state semiconductor devices, because 10,000 transistors is already more efficient (in almost every way) than a 10-state thermionic device.
    – OrangeDog
    Nov 26 at 15:25






  • 2




    The earliest computers used decimal. This became limiting as they became faster. Basically it's because it's faster to switch (and measure) on versus off, than it is to switch to (and measure) one of 10 possible voltages.
    – Aaron F
    Nov 26 at 16:20






  • 5




    We don't know of any technology which would allow a non-binary discrete computer to be more efficient than its binary equivalent. If we did, non-binary would quickly be adopted. What you need is a universe with physics that make an efficient three-state device possible - where you naturally get three states and would need to waste one of those states, at an efficiency cost, to produce a binary system. This is opposite to the condition now where we have efficient two-state devices and need to invent some way to represent three states at a higher level to produce a non-binary system.
    – J...
    Nov 26 at 17:03






  • 5




    Just have one idiot who worked on early computers pick that, and the rest should follow. We do a lot of dumb things just because it's the "standard", in particular weights and measures (a rotation has 360 degrees, an hour has 60 minutes, etc., due to ancient Sumeria's weird number system). As soon as enough people learn a system it becomes very hard to change, regardless of other advantages.
    – Bert Haddad
    Nov 27 at 23:37














24 Answers

















up vote
42
down vote













Non-binary computers, in particular ternary computers, have been built in the past (emphasis mine):




One early calculating machine, built by Thomas Fowler entirely from wood in 1840, operated in balanced ternary. The first modern, electronic ternary computer Setun was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers which eventually replaced it, such as lower electricity consumption and lower production cost.




If you want to make ternary computers the standard, I think you should leverage those advantages: make energy more expensive, so that saving energy is a big advantage, and make production more expensive.



Note that, since smelting silicon is an energy-intensive activity, increasing the cost of energy alone will indirectly raise production costs.
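To make the balanced ternary idea concrete, here is a minimal Python sketch (my own illustration; the -, 0, + digit symbols are an arbitrary convention, not something from the Setun literature). It shows how integers map to balanced ternary, and how negation becomes a trivial digit swap -- one of the symmetries that made the scheme cheap to implement:

    def to_balanced_ternary(n: int) -> str:
        # Encode an integer with the digits -1, 0, +1 (written -, 0, +).
        if n == 0:
            return "0"
        digits = []
        while n != 0:
            n, r = divmod(n, 3)        # r is 0, 1, or 2
            if r == 2:                 # fold the 2 into -1 and carry 1
                r = -1
                n += 1
            digits.append("0+-"[r])    # index -1 wraps to the last character
        return "".join(reversed(digits))

    def negate(bt: str) -> str:
        # Negation is just swapping + and -; no separate sign bit needed.
        return bt.translate(str.maketrans("+-", "-+"))

    print(to_balanced_ternary(5))           # +--  (9 - 3 - 1)
    print(negate(to_balanced_ternary(5)))   # -++  (-5)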























  • 26




    L.Dutch - Although I answered differently, I think the claim about trinary being energy-saving is worth following up. Can you back this up with actual references and research? I'd be interested because I'm reluctant to accept it without being convinced. In particular I wonder if the cost of producing the trinary technology would offset the minor savings of using it.
    – chasly from UK
    Nov 26 at 14:00








  • 7




    It needed more memory when memory was expensive and limited. It demands more advanced components (3 states). It takes more time and knowledge to build them. And after binary had so much behind it, switching is just too wasteful. There is no point in being better if you are too demanding and too late.
    – Artemijs Danilovs
    Nov 26 at 15:42








  • 19




    From an information theoretic viewpoint, the most efficient base to compute in would be "e", but since that's not an integer, 3 would be the closest integer base.
    – Tangurena
    Nov 26 at 16:40






  • 11




    Also keep in mind that Setun was more efficient than binary computers largely because of its design - it came during a major transitional period where semiconductor diodes were just becoming available but transistors had not yet properly matured. They built Setun with diodes and magnetic cores (a system amenable to a three-state implementation) and this would be competing with vacuum tube based computers of the time. With transistor based electronics introduced this gap slammed shut - dramatically. Computers today are about a trillion times more efficient - that's a tough record to beat.
    – J...
    Nov 26 at 18:22






  • 3




    @Tangurena I honestly can't tell whether you're joking or just being mathematically deep. Nice comment either way...
    – leftaroundabout
    Nov 27 at 13:14


















up vote
34
down vote













Instead of avoiding it, transcend binary:



Either let the evolution of technology take its course and somehow create a demand for non-binary processors. Analogous to what is happening now in the crypto currency scene: The developers of IOTA based their project on a ternary architecture model and are even working on a ternary processor (JINN).



Or let aggressive patenting and licensing in the early stages of binary processors (e.g. a general patent for binary processors due to lobbying or misjudgements in the patent office) be the cause for starting work on non-binary processors with less restrictive and more collaborative patents.



Patentability requirements are: novelty, usefulness, and non-obviousness.




[the] nonobviousness principle asks whether the invention is an
adequate distance beyond or above the state of the art




So this could be used to have a patent granted on binary processors. And even if it were an illegitimate patent that would be revoked in future lawsuits, this situation could give rise to non-binary processors.

























  • 23




    You should focus on that second point and expand it more, that sounds interesting.
    – kingledion
    Nov 26 at 14:41










  • Free/open hardware doesn't get monetized very well.
    – RonJohn
    Nov 26 at 14:42










  • @RonJohn That's right. I'll update the answer. Maybe less restrictive patenting/licensing.
    – mike
    Nov 26 at 14:45






  • 1




    Advanced quantum computers could be a good choice for option one.
    – Vaelus
    Nov 26 at 15:54






  • 2




    @JohnDvorak The basis may be binary, but the superpositions are not. While we measure the results of quantum computation as binary numbers, the actual computations are not themselves binary.
    – Vaelus
    Nov 26 at 16:37




















up vote
29
down vote













I would like to advance the idea of an analog computer.



Analog computers are something like the holy grail of electronics. They have the potential of nearly infinitely more computing power, limited only by the voltage or current measuring discriminator (i.e., the precision of measuring an electric state or condition).



The reason we don't have them is because using transistors in their switching mode is simple. Simple, simple, simple. So simple, that defaulting everything to the lowest common denominator (binary, single-variable logic) was obvious.



But even today, change is coming.




Analog computing, which was the predominant form of high-performance computing well into the 1970s, has largely been forgotten since today's stored program digital computers took over. But the time is ripe to change this. (Source)




 




If analog and hybrid computers were so valuable half a century ago, why did they disappear, leaving almost no trace? The reasons had to do with the limitations of 1970s technology: Essentially, they were too hard to design, build, operate, and maintain. But analog computers and digital-analog hybrids built with today’s technology wouldn’t suffer the same shortcomings, which is why significant work is now going on in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits.



...



They were complex, quirky machines, requiring specially trained personnel to understand and run them—a fact that played a role in their demise.



Another factor in their downfall was that by the 1960s digital computers were making large strides, thanks to their many advantages: straightforward programmability, algorithmic operation, ease of storage, high precision, and an ability to handle problems of any size, given enough time. (Source)




But, how to get there without getting hung up on the digital world?




  • A breakthrough in discrimination. Transistors, for all their value, are only as good as their manufacturing process. The more precisely constructed the transistor, the more precise the voltage measurement can be. The more precise the voltage measurement, the greater the programmatic value of a change in voltage -- faster computing and (best of all for most space applications) faster reaction to the environment.


  • Breakthrough in modeling equations. Digital computers are, by comparison, trivial to program (hence, BASIC). Their inefficiency is irrelevant compared to their ease of use. However, this is because double integration is a whomping difficult thing to do on paper, much less to describe such that a machine can process it. But what if we could have languages like Wolfram, R, or Haskell without having to go through the digital revolution of BASIC, PASCAL, FORTRAN, and C first? Our view of programming is very much based on how we perceive (or are influenced by) the nature of computation. Had someone come up with an efficient and flexible mathematical language before the discovery of switching transistors... the world would have changed forever (see the sketch below).
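As a rough sketch of what "programming" such a machine means, here is a digital imitation (with arbitrary constants) of the classic analog patch for a damped oscillator x'' = -k*x - c*x': two integrators and a summing amplifier wired in a feedback loop do the double integration directly:

    # Digital imitation of an analog computer patch for x'' = -k*x - c*x'
    dt, k, c = 1e-3, 4.0, 0.5
    x, v = 1.0, 0.0               # initial "charges" on the two integrators
    for _ in range(10_000):       # ten simulated seconds
        a = -k * x - c * v        # summing amplifier forms the acceleration
        v += a * dt               # first integrator:  a -> v
        x += v * dt               # second integrator: v -> x
    print(f"x after 10 s: {x:.4f}")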



Would this entirely remove digital from the picture?



Heck, no. That's like saying the development of a practical Lamborghini (if the word practical can ever be applied to a Lamborghini) before, say, the Edsel would mean we would never have seen the Datsun B210. The single biggest weakness of analog computing is the human-to-machine interface. The ability to compute in real time rather than through a series of discrete, often barely related steps is how our brains work — but that doesn't translate well to telling a machine how to do its job. The odds are good that a hybrid machine (digital interface to an analog core) would be the final solution (as it may well be today). Is this germane to your question? Not particularly.



Conclusion



Two breakthroughs -- one in transistor manufacture and the other in symbolic programming -- are all that would be needed to advance analog computation, with all of its limitless computational power, over digital computing.



























  • It's happening, although slowly: scientificamerican.com/article/…
    – Jan Dorniak
    Nov 26 at 20:39






  • 8




    If neural networks had been better developed before digital surpassed analog, perhaps the energy savings of analog neural networks would have prevented binary's triumph. This change might have happened if only Marvin Minsky had discovered the potential of backpropagation in his book "Perceptrons", rather than focusing on neural networks' limitations.
    – AShelly
    Nov 26 at 23:40






  • 2




    The source seems pretty biased. The largest analog computer setup I'm aware of was the Dutch "Deltar" simulation of the national flood barrier system. While it was used in the 70's, it was already outdated at the time. Its design dated back to the 40's, and it was built in the 60's. And very importantly, it was not general-purpose at all. It wasn't even domain-general; it simulated the Dutch water system and nothing else.
    – MSalters
    Nov 27 at 15:35










  • The Lamborghini 2.86 DT: lectura-specs.com/en/model/agricultural-machinery/… seems quite practical to me, but I do get your point.
    – BentNielsen
    Nov 29 at 0:43


















up vote
14
down vote













Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).



Instead I will give a political answer.



As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world, the Soviets often resented the fact that US scientific papers got more credence -- after all, English is the lingua franca of science. (This is true by the way, not a fiction; I'll look for references.)



Suppose the Russians had won a war over the West. It was common in Soviet Russia for science to be heavily politicised (again, I'll look for references). Therefore, regardless of the validity of a non-binary system, the Russians could have mandated ternary or some other base simply as a form of triumphalism.



Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.






The first modern, electronic ternary computer Setun was built in 1958
in the Soviet Union at the Moscow State University by Nikolay
Brusentsov



https://en.wikipedia.org/wiki/Ternary_computer






























  • This would hardly be a minimal change.
    – mike
    Nov 26 at 14:56






  • 3




    @mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
    – chasly from UK
    Nov 26 at 14:58








  • 3




    I agree that it can be minimal in a political solution space. I hereby withdraw my comment :D
    – mike
    Nov 26 at 15:12










  • A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China but that would involve changing up the Sino-Soviet split causing ripple effects). Later on, the transition to a freer economy (either by reform or collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
    – 0something0
    Nov 30 at 6:03




















up vote
12
down vote













A ternary system would be preferred in a world where data storage cost exceeds all other cost considerations in computers. This preference would be due to radix economy, which essentially quantifies the relative cost of storing numbers in a particular numbering system. Euler's number e ≈ 2.718 has the lowest radix economy. Among integers, 3 has the lowest radix economy, lower than 2 and 4 (which have the same).



If the first storage medium used for computing would have stored ternary digits for less or just slightly more cost than binary digits, and if processing cost would have been insignificant compared to storage cost, ternary computing might have become the dominant standard. The advantage of ternary systems is small (around 5 percent), but could be important if storage cost was a serious consideration.



Binary computers dominate today mostly because electricity was the first effective medium to store and process numbers, and a single threshold voltage to distinguish between two states is easier to manage than two or more thresholds for three or more states.



Build your transistors in a medium that can store and process ternary digits efficiently, and emphasize the high cost of storage. A mechanical example would be a switch that can take three positions in a triangle.
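For the curious, the radix-economy claim is easy to check in Python. The cost of writing numbers up to N in base b is E(b, N) = b * ceil(log_b(N + 1)) (symbols per digit, times digits needed); the continuous version b / ln b is minimized at b = e:

    from math import ceil, log

    def radix_economy(b: int, N: int) -> int:
        # b possible symbols per digit, times the digits needed to write N
        return b * ceil(log(N + 1, b))

    N = 10**6
    for b in (2, 3, 4, 10):
        print(b, radix_economy(b, N), round(b / log(b), 3))
    # base 3 comes out cheapest (39) versus bases 2 and 4 (40 each)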



























  • Very interesting!
    – kingledion
    Nov 27 at 15:11


















up vote
10
down vote













Toolforger has one thing right: binary computers are the most efficient computing devices possible. Period. Ternary has no technological advantage whatsoever.



However, I'm going to give a suggestion of how you can offset the disadvantage of ternary computing, to allow your society to actually use ternary computers instead of binary ones:



Your society has evolved to use a balanced numeral system.



Balanced numeral systems don't just use positive digits like we do, they use an equal number of negative and positive digits. As such, balanced ternary uses three digits for -1, 0, and 1 instead of the unbalanced 0, 1, and 2. This has several beneficial consequences:




  • Balanced numeral systems have symmetries that unbalanced systems lack. Not only can you exploit commutativity when doing calculations (you know what 2+3 is, so you know what 3+2 is), but also symmetries based on sign: -3-2 = -(3+2), -3*2 = 3*-2, -3*-2 = 3*2, and 3*-2 = -(3*2).


  • You have more computations with trivial outcome: x+(-x) = 0 and -1*x = -x.



  • The effect is that you have much less to learn when learning balanced numeral systems. For instance, unbalanced decimal requires you to learn 81 data points by heart to perform all four basic computations, whereas balanced nonal (9 digits from -4 to 4) requires only 31 data points, of which only 6 are for multiplication. The right-most column uses -4 = d, -3 = c, -2 = b, and -1 = a as negative digits:



    2*2 = 0*9 +4 =  4
    2*3 = 1*9 -3 = 1c
    2*4 = 1*9 -1 = 1a
    3*3 = 1*9 +0 = 10
    3*4 = 1*9 +3 = 13
    4*4 = 2*9 -2 = 2b


    The entire rest is either trivial or follows from symmetries. That's all the multiplication table your school kids need to learn!



  • Because you can get both positive and negative carries, you get far fewer and smaller carries in long additions. They simply tend to cancel each other out.


  • Because you have negative digits as well as positive ones, negative numbers are just an integral part of the system. In decimal, you have to decide which number is greater when doing a subtraction, then subtract the smaller number from the larger one, then reattach a sign to the result based on which of the two numbers was greater. In balanced systems you don't care which number is greater, you just do the subtraction. Then you look at the result and see whether it's positive or negative...



As a matter of fact, I once learned to use balanced nonal just for fun, and in general, it's indeed much easier to use than decimal.



My point is: To anyone who has been brought up calculating in a balanced numeral system, an unbalanced system would just feel so unimaginably awkward and cumbersome that they will basically think that ternary is the smallest base you can use. Because binary lacks the negative digits, how are you supposed to compute with that? What do you do when you subtract 5 from 2? You absolutely need a -1 for that!



As such, a society of people with a balanced numeral system background may conceivably settle on balanced ternary computers instead of binary ones. And once a chunk of nine balanced ternary digits has been generally accepted as the smallest unit of information exchange, no one will want to use 15 bits (what an awkward number!) to transmit the same amount of information in a binary fashion, with all the losses that would imply.



The result is basically a lock-in effect to balanced ternary that would keep people from using binary hardware.





Aside: Unbalanced decimal vs. balanced nonal



Here is a more detailed comparison between decimal and balanced nonal. I'm using a, b, c, d as the negative digits -1, -2, -3, -4 here, respectively:





  • Negation



    Here the learning effort for decimal is zero. For balanced nonal, you have to learn the following table with nine entries:



            | d c b a 0 1 2 3 4
    --------+------------------
    inverse | 4 3 2 1 0 a b c d



  • Addition



    Decimal has the following addition table; the second table shows the 45 entries that need to be learned:



    + |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |  0  1  2  3  4  5  6  7  8  9
    1 |  1  2  3  4  5  6  7  8  9 10
    2 |  2  3  4  5  6  7  8  9 10 11
    3 |  3  4  5  6  7  8  9 10 11 12
    4 |  4  5  6  7  8  9 10 11 12 13
    5 |  5  6  7  8  9 10 11 12 13 14
    6 |  6  7  8  9 10 11 12 13 14 15
    7 |  7  8  9 10 11 12 13 14 15 16
    8 |  8  9 10 11 12 13 14 15 16 17
    9 |  9 10 11 12 13 14 15 16 17 18

    Entries to learn:

    + |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |
    1 |     2
    2 |     3  4
    3 |     4  5  6
    4 |     5  6  7  8
    5 |     6  7  8  9 10
    6 |     7  8  9 10 11 12
    7 |     8  9 10 11 12 13 14
    8 |     9 10 11 12 13 14 15 16
    9 |    10 11 12 13 14 15 16 17 18


    The same table for balanced nonal only has 16 entries that need to be learned:



    + |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d | a1 a2 a3 a4  d  c  b  a  0
    c | a2 a3 a4  d  c  b  a  0  1
    b | a3 a4  d  c  b  a  0  1  2
    a | a4  d  c  b  a  0  1  2  3
    0 |  d  c  b  a  0  1  2  3  4
    1 |  c  b  a  0  1  2  3  4 1d
    2 |  b  a  0  1  2  3  4 1d 1c
    3 |  a  0  1  2  3  4 1d 1c 1b
    4 |  0  1  2  3  4 1d 1c 1b 1a

    Entries to learn:

    + |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d |
    c |
    b |
    a |
    0 |
    1 |                 2
    2 |           1     3  4
    3 |        1  2     4 1d 1c
    4 |     1  2  3    1d 1c 1b 1a


    Note the missing diagonal of zeros (a number plus its inverse is zero), and the missing upper left half (the sum of two numbers is the inverse of the sum of the inverse numbers).



    For instance, to calculate b + d, you can easily derive the result as b + d = inv(2 + 4) = inv(1c) = a3.




  • Multiplication



    In decimal, you have to perform quite a bit of tough learning:



    * |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |  0  0  0  0  0  0  0  0  0  0
    1 |  0  1  2  3  4  5  6  7  8  9
    2 |  0  2  4  6  8 10 12 14 16 18
    3 |  0  3  6  9 12 15 18 21 24 27
    4 |  0  4  8 12 16 20 24 28 32 36
    5 |  0  5 10 15 20 25 30 35 40 45
    6 |  0  6 12 18 24 30 36 42 48 54
    7 |  0  7 14 21 28 35 42 49 56 63
    8 |  0  8 16 24 32 40 48 56 64 72
    9 |  0  9 18 27 36 45 54 63 72 81

    Entries to learn:

    * |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |
    1 |
    2 |        4
    3 |        6  9
    4 |        8 12 16
    5 |       10 15 20 25
    6 |       12 18 24 30 36
    7 |       14 21 28 35 42 49
    8 |       16 24 32 40 48 56 64
    9 |       18 27 36 45 54 63 72 81


    But in balanced nonal, the second table is reduced heavily: the three quadrants on the lower left, the upper right, and the upper left all follow from the lower right one via symmetry.



    * |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d | 2b 13 1a  4  0  d a1 ac b2
    c | 13 10 1c  3  0  c a3 a0 ac
    b | 1a 1c  4  2  0  b  d a3 a1
    a |  4  3  2  1  0  a  b  c  d
    0 |  0  0  0  0  0  0  0  0  0
    1 |  d  c  b  a  0  1  2  3  4
    2 | a1 a3  d  b  0  2  4 1c 1a
    3 | ac a0 a3  c  0  3 1c 10 13
    4 | b2 ac a1  d  0  4 1a 13 2b

    Entries to learn:

    * |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d |
    c |
    b |
    a |
    0 |
    1 |
    2 |                    4
    3 |                   1c 10
    4 |                   1a 13 2b


    For instance, to calculate c*d, you can just do c*d = 3*4 = 13. Or for 2*b, you derive 2*b = inv(2*2) = inv(4) = d. It's really a piece of cake, once you are used to it.




Taking this all together, you need to learn




  • for decimal:

    0 inversions

    45 summations

    36 multiplications
    Total: 81


  • for balanced nonal:

    9 inversions

    16 summations

    6 multiplications
    Total: 31
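If you want to play with balanced nonal yourself, here is a small Python sketch (using the same digit letters a-d as above) that converts integers to balanced nonal; it reproduces the table entries, e.g. b + d = a3 and c*d = 13:

    DIGITS = "dcba01234"                  # value -4 -> 'd', ..., value 4 -> '4'

    def to_balanced_nonal(n: int) -> str:
        if n == 0:
            return "0"
        out = []
        while n != 0:
            r = ((n + 4) % 9) - 4         # remainder in the range -4..4
            out.append(DIGITS[r + 4])
            n = (n - r) // 9
        return "".join(reversed(out))

    print(to_balanced_nonal(-2 + -4))     # a3  (b + d)
    print(to_balanced_nonal(-3 * -4))     # 13  (c * d)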






























  • I can't quite figure how you get 81 data points for our decimal system. Care to elaborate?
    – Wildcard
    Nov 28 at 0:21






  • 3




    @Wildcard It's 45 data points for addition (1+1, 1+2, 1+3, ..., 2+2, 2+3, ..., 9+9) and 36 data points for multiplication (2*2, 2*3, 2*4, ..., 3*3, 3*4, ..., 9*9). I have removed the trivial additions with zero, the trivial multiplications with zero and one, and the half of the table that follows from commutativity.
    – cmaster
    Nov 28 at 9:05






  • 1




    @Wildcard For balanced nonal, it's similar, except that you need to add 9 data points for inversion. The summation table is reduced by the trivial additions with zero, the trivial additions that yield zero (x+(-x) = 0), by commutativity, and the sign symmetry (-x+(-y) = -(x+y)), so only 16 data points remain. For multiplication, since we already know inversion, multiplication with -1, 0, and 1 is trivial, multiplications with a negative factor follow from symmetry, so we are only left with the table for the digits 2, 3, and 4. Which yields the six data points I've shown in my answer.
    – cmaster
    Nov 28 at 9:12










  • @Wildcard I have now updated my answer with a more detailed comparison. Hope you like it.
    – cmaster
    Nov 28 at 16:16


















up vote
9
down vote













Base-4



This might be a natural choice for a society that perfected digital communication before digital computation.



Digital signals are often transmitted (i.e., “passed through the analogue world”) using quadrature phase-shift keying, a special form of quadrature amplitude modulation. This is generally more performant and reliable than simple amplitude modulation, and more efficient than frequency modulation.



QPSK / QAM by default use four different states, or a multiple of four, as the fundamental unit of information. We usually interpret this as "it always transmits two bits at a time", but if this method were the standard before binary computers, we'd probably be used to measuring information in quats (?) rather than bits.



Ultimately, the computers would at the lowest level probably end up looking a lot like our binary ones, but usually with two bits paired together into a fundamental "4-logical unit". Unlike binary-coded decimal, this doesn't incur any overhead of unused binary states.



And it could actually make sense to QPSK-encode even the local communication between processor and memory etc. -- wireless transmission everywhere! -- thus making the components "base-4 for all that can be seen".
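To illustrate why QPSK is naturally base-4, here is a tiny Python sketch (my own illustration; real modems add Gray coding, pulse shaping, etc.) mapping quats directly onto four carrier phases spaced 90 degrees apart:

    import cmath, math

    # One complex unit-circle symbol per quat: four phases, 90 degrees apart.
    CONSTELLATION = [cmath.exp(1j * (math.pi/4 + k * math.pi/2)) for k in range(4)]

    def modulate(quats):
        return [CONSTELLATION[q] for q in quats]

    for q, s in zip(range(4), modulate([0, 1, 2, 3])):
        print(q, f"{s.real:+.3f} {s.imag:+.3f}j")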





























  • What might be considered a related interesting factoid is that DNA genetic code is also base-4. But, I wouldn't say this should have any relevance upon the development of computers.
    – leftaroundabout
    Nov 27 at 13:59










  • Re your comment: biological computers, perhaps?
    – Wildcard
    Nov 28 at 0:18










  • QAM waveforms are very different from binary PCM, though, and QAM can encode many numbers of states, not just 4
    – endolith
    Nov 28 at 18:56






  • 1




    @endolith yeah, but QAM usually encodes some 2²ⁿ states, i.e. a multiple of four, doesn't it? And anyways before computers, they'd plausibly not go over four states, i.e. QPSK – anything more is basically just exploiting excessive SNR to get higher data rate, but prior to computers they'd probably just engineer the SNR to be just enough for QPSK, which already gives the essential advantage.
    – leftaroundabout
    Nov 28 at 23:36


















up vote
6
down vote













It's almost completely irrelevant.



The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.



In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices — they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).



No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Usually, the smallest accessible unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly is the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)



Donald Knuth's Art of Computer Programming, which is the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples — and one of the charming characteristics of MIX is that one does not know whether it's a binary or a decimal computer.



What actually matters is that modern computers are digital — in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
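A small illustration of that last point in Python: the stored value is one and the same; only the textual rendering picks a base:

    n = 1_000_000
    print(bin(n))                           # 0b11110100001001000000
    print(format(n, "o"), format(n, "x"))   # octal and hexadecimal renderings
    # Arbitrary-precision libraries often store their "digits" in base 10**4
    # or 2**32; the arithmetic works the same either way.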





























  • I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
    – pipe
    Nov 27 at 9:58












    @pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitrary-precision computations.
    – AlexP
    Nov 27 at 11:12








  • 1




    Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today are designed the way they are because computers are predominantly base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc.
    – pipe
    Nov 27 at 12:16










    @pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2¹⁶ code points...
    – AlexP
    Nov 27 at 12:23


















up vote
5
down vote













Get rid of George Boole, inventor of Boolean Algebra, probably the main mathematical foundation of computer logic.



Without Boolean Algebra, regular algebra would give quite an edge to decimal computers, even if you needed three to four times as much hardware per digit.



There's no need to kill him, just have something happen that stops his research or gets him interested in another field instead.



























  • Comments are not for extended discussion; this conversation has been moved to chat.
    – L.Dutch
    Nov 27 at 19:27


















up vote
5
down vote













EDIT - On reading the answer by L.Dutch, I see that there is an energy-saving argument for using trinary. I'd be interested to find out how theoretically true that is. Crucially, the OP talks about transistors rather than thermionic valves, and that could make a difference. There are also other energy questions to address besides the simple switching of a transistor. It would be good to know the extent of this saving and any extra cost associated with building and maintaining the hardware. Heat dissipation may also be an issue.



I remain open-minded as well as interested in this approach.





I don't think there is a historical justification for your premise as far as transistors are concerned, so instead I will just say:



The minimum historical change is No Electronics



It's possible to use other bases, but it's just a really bad idea.



IBM 1620 Model I, Level H




IBM 1620 data processing machine with IBM 1627 plotter, on display at
the 1962 Seattle World's Fair.

The IBM 1620 was announced by IBM on October 21, 1959,[1] and marketed
as an inexpensive "scientific computer".[2] After a total production of
about two thousand machines, it was withdrawn on November 19, 1970.
Modified versions of the 1620 were used as the CPU of the IBM 1710 and
IBM 1720 Industrial Process Control Systems (making it the first
digital computer considered reliable enough for real-time process
control of factory equipment)[citation needed].



Being variable word length decimal, as opposed to
fixed-word-length pure binary, made it an especially attractive first
computer to learn on – and hundreds of thousands of students had their
first experiences with a computer on the IBM 1620.



https://en.wikipedia.org/wiki/IBM_1620




The key phrase there is variable word length decimal, which is a real faff and actually still uses binary at the electronic level.



Reasoning



Any other electronic system than binary will soon evolve into binary because it depends on digital electronics.



It is commonly supposed, by those not in the know, that zero voltage represents a binary zero and some arbitrary voltage, e.g. 5 volts, represents a 1. However, in the real world these voltages are never so precise. It is much easier to have two ranges with a specified changeover point.



Having to maintain, say, ten different voltages for ten different digits would be incredibly expensive, unreliable, and not worth the effort.



So your minimum historical change is No Electronics.

























  • 3




    This answer does not meet the question's "Transistors were invented in this alternate timeline, in the 1950s" constraint.
    – RonJohn
    Nov 26 at 14:29






  • 2




    @RonJohn - I understand your point. I suppose I could have answered "Given the conditions you propose, there is no historical answer to your question." Maybe I'll change the wording to add that extra sentence.
    – chasly from UK
    Nov 26 at 14:38










    The electronics of the IBM 1620 were entirely binary. The decimal capabilities depended on the way real-world data was encoded in binary, not on hardware that somehow used 10 different states to represent decimal numbers.
    – alephzero
    Nov 27 at 23:46










  • @alephzero - If you read my answer carefully you'll see I said, "The key phrase there is variable word length decimal which is a real faff and actually still uses binary at the electronic level."
    – chasly from UK
    Nov 28 at 0:39




















up vote
4
down vote













As I understand it, early tribes used base 12, and it's a lot more flexible than 10 -- they had a way to count to 12 by counting knuckles, getting up to 60 on two hands pretty easily, which is the basis of our "degrees".



10-finger-counters supposedly defeated the base-12ers but kept their time system and degree-based trigonometry.



If the base-12ers had won, a three-state computer might have made a LOT more sense (binary might have actually looked silly). In this case, a byte would probably be 8 tri-state bits (let's call it 8/3), which would fit 2 base-12 digits instead of our 8/2 layout, which always had a bit of a mismatch.



We tried to cope with our mismatch by using BCD, throwing away 6 states from each nibble (1/2 byte) for a closer approximation of base 10, which gave us a "pure" math without all these weird binary oddities (like how, in base 10, 1 byte holds 256 states, 2 bytes hold 65536, etc.).



With 8/3, base-12ers would have no mismatch; it would be really clean. Round trit numbers would often look like nice base-12 numbers: 1 byte would hold 100 states, and 2 bytes would hold 10000, etc.



So can you change the numeric base of your book? Shouldn't come up too often :) It would be fun to even number the pages in base 12... complete immersion.



























    Base 6 has almost all of the advantages of base 12, but requires much smaller addition and multiplication tables. And it can be built by pairing binary elements with ternary elements. Also, 6^9 ~ 10^7 (10,077,696).
    – Jasper
    Nov 27 at 20:42


















up vote
3
down vote













Decimal computers.



Modern computers are, indeed, binary. Binary is the classification of an electrical signal as occupying one of two states, conditional on the voltage. For the sake of simplicity, you could say that in a 5V system, any signal above 4V is a '1' and everything else is a '0'. Once a signal has been confined to two states, it's pretty easy to apply Boolean math, which was already well-explored ahead of computers. Binary was an easy choice for computers because so much work was already done in the area of Boolean algebra.



When we needed to increase the range of numbers, we added more signals. Two signals (two bits) could represent 4 distinct values; 3 bits, 8 values; and so on. But what if, instead of adding more signals to expand our values, we simply divided the existing signals up more? In a 5V system, one signal could represent a digit from 0-9 if we divide up the voltage: 0-0.5 volts = 0, 0.5-1.0 volts = 1, 1.0-1.5 volts = 2, etc. Each signal would then distinguish ten states instead of two, five times as many. But why stop there? Why not split each signal into 100 distinct values?



Well, for the same reason we never went further than binary - environmental interference and lack of precision components. You need to be able to precisely measure the voltages to determine the value, and if those voltages change, your system becomes unreliable. All types of factors can affect electrical voltages, RF, temperature, humidity, metal density, etc. As components age, their tolerances tend to decrease.
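A quick sketch of why the narrow bands hurt (the 0.5 V bands follow the example above; all numbers are illustrative):

    # One wire, 0-5 V. Decimal: ten 0.5 V bands. Binary: one threshold at 2.5 V.
    def decode_decimal(voltage: float) -> int:
        return min(max(int(voltage / 0.5), 0), 9)

    def decode_binary(voltage: float) -> int:
        return 1 if voltage >= 2.5 else 0

    # A digit driven to the center of its band survives only ~0.25 V of noise
    # before it reads as a neighbor; the binary signal survives up to 2.5 V.
    print(decode_decimal(3.75), decode_decimal(3.75 + 0.3))   # 7, then 8
    print(decode_binary(5.0), decode_binary(5.0 - 2.0))       # 1, still 1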



Any number of things could have changed this. If you use a different medium -- light, for example -- interference isn't a concern. This is exactly why fiber optics can carry so much more data than electrical connections.



The discovery of a room-temperature superconductor could also have allowed different computers to become standard. A superconductor doesn't lose energy to heat. This means you could pump more voltage through a system without fear of overheating, requiring less precise components and less (no) cooling.



So, in short, binary computers dominate because of physical limitations related to electricity and the wealth of knowledge (Boolean algebra) that was already available when vacuum tubes, transistors, and semiconductors came about. Change any of those factors, and binary computers may never have been.


































up vote
3
down vote













In the late 1950s, analog computers were developed using a hydraulic technology called fluidics. Fluidic processing is still used in automatic transmissions, although newer designs are hybrid electronic/fluidic systems.























  • 2




    Can you explain more about this? Describe fluidic processing, link to more info, and then explain why it might have surpassed binary digital computing?
    – kingledion
    Nov 27 at 15:20


















up vote
2
down vote













Hypercomputation



According to Wikipedia, hypercomputation is defined as follows:




Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.

The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.

Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.




What this means is that hypercomputation can do things computers cannot do. Not in terms of scope limitations, such as the ability to access things on a network, but rather in terms of what can and cannot be fundamentally solved as a mathematical problem.



Consider this. Can a computer store the square root of 2 and operate on it? Well, maybe: it could store the coefficients of the polynomial whose solution is that square root and then index the solutions to that polynomial. Alright, so we can then represent so-called algebraic numbers (at least I believe so). What about all real numbers? Euler's constant and pi are likely candidates for being unrepresentable in any meaningful sense using binary. We can approximate, but we cannot have perfect representations. We could make pi a special symbol, and e as well, and just increase the symbol set. Still not good enough. That's the primary thing that comes to mind, at least to me: the ability to digitally compute any real number with perfect precision.
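As a sketch of that coefficient-storing idea (my own illustration): exact arithmetic in Q(sqrt(2)), where every number is kept as a rational pair (a, b) meaning a + b*sqrt(2), so squaring the square root of 2 gives exactly 2 with no rounding anywhere:

    from fractions import Fraction

    class QSqrt2:
        # Exact numbers of the form a + b*sqrt(2) with rational a, b.
        def __init__(self, a, b=0):
            self.a, self.b = Fraction(a), Fraction(b)
        def __mul__(self, o):
            # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, since r*r = 2
            return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                          self.a * o.b + self.b * o.a)
        def __repr__(self):
            return f"{self.a} + {self.b}*sqrt(2)"

    root2 = QSqrt2(0, 1)
    print(root2 * root2)   # 2 + 0*sqrt(2)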



This would be a reason for such a society to never discover binary computers being useful. At some point we switched from analog to binary because of electrical needs and signalling concerns. We modeled the modern notion of a processor, among other things, loosely on the Turing machine, which ultimately became the formal way of discussing computability.

That formalisation was a multi-faceted convergence of sorts. There was the idea of something being human-computable, and then theoretically computable; the rough abstract definition used for many years ended up converging with the notion of the Turing machine. There was also a concept from set theory (I don't recall the name) that ended up converging on the exact same notion of "computable". All of these converging basically meant it was settled: that is what we as a society (or even as the human race) were able to come up with as a notion of what is and is not programmable.

However, that is the convergence of possibly over 3000 years of mathematical development, beginning in concept perhaps as far back as Euclid, when he formalized the most basic notions of theorems and axioms. Before that, math existed, but it was just a tool; nobody had a formal notion of it -- things were just obvious and known. If hypercomputation is possible for humans to do (rather than being limited to machines), then all it would take is one genius in the entire history of math to crack it. I'd say that is a reasonable thing for an alternate history.


































up vote
1
down vote













Base-10 computing machines were used commercially to control the early telephone switching system. The telephone companies used them because they were solving a base-10 problem. As long as transistors remain larger and more expensive than mechanical relays, there's no reason for telephone switchboards to switch to binary.



But that's cheating the spirit of the question. Suppose cheap transistors are invented. Then how can a civilization get out of binary computing? Binary logic is the best way to build an electronic deterministic computer with cheap transistors.



Answer: Analog neural networks outperform manually-programmed computers.



Humans are bad at programming computers directly. Manually-programmed computers can perform only simple unambiguous tasks. Statistical programming, also called "machine learning", can answer questions without clear mathematical answers. Machine learning can answer questions like "is this a picture of a frog". Hand-coding an algorithm to determine "is this a picture of a frog" is well beyond the capabilities of human beings. So are more complex tasks like "enforce security at this railroad station" and "take care of my grandmother in her old age".



      Manually-programmed software outnumbers neural-network-based software right now, but that might plausibly be just a phase. Manually-programmed software is easier to create. In a few hundred years, neural-network-based software might outnumber manually-programmed software.



      One of the most promising avenues of machine learning involves neural networks, which use ideas copied from biological brains. If we invent a good general-purpose AI, it might take the form of a neural network, especially if the AI is modeled on the human brain.



      If you're designing a computer to execute traditional programs, then binary is the best way to go. But if the goal of a microchip is to simulate a human brain, it may be inefficient to build a binary computer and then simulate a human brain on top of it. It might make more sense to build the neural network into hardware directly. The human brain is an analog device, so a microchip modeled on the human brain may be an analog device too.



      If someone figured out how to build a powerful general-purpose AI as an analog neural network then chips optimized for neural networks may largely replace binary computers.






      share|improve this answer




























        up vote
        1
        down vote













        One simple change would be to make solid-state electronics impossible. Either your planet doesn't have abundant silicon, or there is some chemical issue which makes it uneconomic to manufacture semiconductors.



        Instead, consider what would happen if Charles Babbage's mechanical computer designs (which were intrinsically decimal devices, just like the mechanical calculators which already existed in Babbage's day) were scaled down to nano-engineering size.



        The earliest computers used vacuum tube electronics, not semiconductors. The basic design of vacuum-tube memory circuits (the flip-flop) was already known by 1920, long before the first computers, but for large-scale computer memory, tubes would have been prohibitively large, power-hungry, and unreliable. The earliest computers therefore used various alternative systems, some of which were in effect mechanical, not electrical. So the notion of totally mechanical computers does have some relation to actual history.






        share|improve this answer




























          up vote
          1
          down vote













          Morse code rules.



          telegraph
          https://www.kaspersky.com/blog/telegraph-grandpa-of-internet/9034/



          Just as modern keyboards retain the QWERTY of the first typewriters, in your world the trinary code of Morse becomes the language of computers. Computers developed to rapidly send and receive messages naturally use this language to send messages other than written language, and then to communicate between parts of themselves.



          There are apparently technical reasons making binary more efficient. https://www.reddit.com/r/askscience/comments/hmy7w/if_morse_is_more_efficient_than_binary_why_dont/



          I am fairly certain that there are more efficient layouts than QWERTY as well, yet many decades after the need to keep keys spatially distant disappeared, we still have QWERTY. So too Morse in your world: it was always the language of computers, and endures as such. (A sketch of Morse-as-trits follows below.)
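
          To make the trinary-Morse premise concrete, here is a minimal sketch (plain Python; the tiny table covers only a few letters, and every name is invented for illustration): dot, dash and the inter-letter gap are treated as three distinct trits rather than as timed on/off pulses.

              MORSE = {"E": ".", "T": "-", "A": ".-", "N": "-."}
              TRIT = {".": 0, "-": 1, " ": 2}   # 2 is an explicit letter separator

              def encode(text):
                  """Encode text as trits, with one gap-trit between letters."""
                  symbols = " ".join(MORSE[c] for c in text.upper())
                  return [TRIT[s] for s in symbols]

              print(encode("TEA"))   # [1, 2, 0, 2, 0, 1]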






          share|improve this answer

















          • 1




            Interesting. You could merge this with one of the reasons for ternary mathematics to provide a more robust explanation for ternary computers.
            – kingledion
            Nov 28 at 2:16










          • But Morse code is binary
            – endolith
            Nov 30 at 17:20










          • @endolith - Morse has dot, dash, and space.
            – Willk
            Nov 30 at 21:56










          • @Willk Which are made up of on and off.
            – endolith
            Dec 1 at 7:09










          • @endolith - listen to some morse code. The length of the "on" is what distinguishes dot from dash. Spaces must be put in between, But don't take it from me. Read up: cs.stackexchange.com/questions/39920/…
            – Willk
            Dec 1 at 17:40


















          up vote
          1
          down vote













          Oh, man. Although non-binary computers would be extremely inconvenient, I can easily imagine trinary, decimal or even analog computers becoming dominant, due to a terrible force all developers fear: entrenchment, and the need for legacy support. Time and again in computing history we've struggled with decisions made long ago that sheer inertia kept us from breaking free of for a long time. There's a lot of stuff even in modern processors that we would never choose today, if it weren't for the need to support existing software and architecture decisions.



          So for your scenario, I imagine that for some reason one type of non-binary computer got a head start. Maybe for many years, computers didn't improve all that much due to some calamity. But software was still written for these weak computers, extremely useful and good software, tons of it. By the time things got going again, it was just much more profitable to focus on making trinary (or whatever) better, rather than trying to redo all the work Ninetel put into their 27-trit processor.



          Sure, there are some weirdos claiming that binary is so much more sensible that it's worth it to make a processor that's BISC (binary instruction set circuit) in the bottom with a trinary emulation layer on top. But after the bankruptcy of Transbita, venture capital has mostly lost interest in these projects.






          share|improve this answer




























            up vote
            1
            down vote













            The creatures involved have three fingers and use a ternary numeral system in everyday life. The technical advantages of binary over ternary aren't as great as the advantages (radix economy, etc.) of binary over decimal, so they never bothered to adopt any system other than the one they knew innately.
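
            For what it's worth, the radix-economy point is easy to check (a quick sketch in plain Python): the cost of representing N in base b grows like b * log_b(N), so the per-digit hardware cost is proportional to b / ln(b), which base e minimizes and which base 3 comes closest to among integers.

                from math import log, e

                for b in (2, 3, e, 10):
                    print(f"base {b:6.3f}: relative cost {b / log(b):.3f}")

                # base  2.000: relative cost 2.885
                # base  3.000: relative cost 2.731   (slightly beats binary)
                # base  2.718: relative cost 2.718   (the theoretical optimum)
                # base 10.000: relative cost 4.343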






            share|improve this answer























            • Creatures with three genders! Three parents.
              – Amarth
              2 days ago


















            up vote
            1
            down vote













            They made quantum computing work much sooner than we did.



            Why have only two states, when a qubit's state space is a continuum?



            They probably had binary computers for a short time, then cracked quantum.
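
            As a minimal illustration of that claim (a hand-rolled sketch, not any real quantum library): a single qubit's state is a continuous pair of amplitudes, even though measuring it still yields one of two outcomes.

                import math

                theta = 0.3    # any angle works: a continuum of valid qubit states
                alpha, beta = math.cos(theta), math.sin(theta)

                # Probabilities of measuring 0 or 1; they always sum to 1.
                print(alpha**2, beta**2, alpha**2 + beta**2)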




            What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?




            Someone cracked a cheap, room-temperature way to make qubits.



            (ref: https://medium.com/@jackkrupansky/the-greatest-challenges-for-quantum-computing-are-hardware-and-algorithms-c61061fa1210)






            share|improve this answer






























              up vote
              1
              down vote













              Politically enforced decimal base-10, expressed as binary-coded decimal



              The most reasonable alternative to the binary computer (which is the most efficient) would be a decimal, base-10 one.



              Suppose a government mandated that computers be decimal, since that system is most natural to humans. Perhaps they feared early on that computers would be restricted to an "elite" who understood binary and hex numbers, and wanted the technology to be accessible to everyone.



              It's the same argument as why the computer mouse was invented and became a success: not because it was faster to use, and certainly not because it was ergonomic, but because it was easier to use. Computer history repeats ease of use as an argument: Windows won and became the dominant OS, and so on.





              A decimal computer could still be possible without changing the way computers work all that much - they would be using binary-coded decimal (BCD). Processors would use different OP codes and data would be stored differently in memories. But otherwise, transistors will still remain on or off. Boolean logic will remain true or false.



              Data would take up more space and calculations would be slower, but potentially it would be easier for humans to interpret raw data that way.



              Take for example the decimal number 99. If you just know that binary for 9 is 1001, then you can write 99 in BCD as 1001 1001. This is how those nerdy binary watches work: they aren't actually using true base-2 binary but BCD, which is easier to read. Otherwise even a nerd would struggle to read the time.



              To express the number 99 in raw base-2 binary, it would be 110 0011. Not nearly as readable for humans, though we saved one bit of storage. To read it, a human has to expand it in decimal: 64 + 32 + 0 + 0 + 0 + 2 + 1 = 99.
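
              A minimal sketch of that encoding in plain Python (the helper name is invented for illustration):

                  def to_bcd(n):
                      """Encode a non-negative integer, one 4-bit group per decimal digit."""
                      return " ".join(format(int(d), "04b") for d in str(n))

                  print(to_bcd(99))        # 1001 1001 - reads off directly as decimal 99
                  print(format(99, "b"))   # 1100011   - one bit shorter, but opaque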






              share|improve this answer

































                up vote
                0
                down vote













                Binary computers are simply the most efficient ones, claims to the contrary notwithstanding (such claims are based on pretty exotic assumptions; the linked claim that "the future lies in analog computers" is even hilariously wrong, though I can see where Ulmann comes from).

                Binary computers are simply the most cost- and space-efficient option if the technology is based on transistors: a ternary computer would require more transistors than a binary one to store or process the same amount of information. The reason is that electrically, the distinction between "zero volts" and "five volts" is really "anything below 2.0 volts" versus "anything above 3.0 volts", which is much easier to control than ternary voltage levels such as "below 1.0 volts, between 2.0 and 3.0 volts, or between 4.0 and 5.0 volts". Yes, you need the gaps between the voltage bands, because you need to deal with noise and imprecision (manufacturing spread, electrical noise); and yes, the gaps are pretty large, because the larger the gap, the more variance in the integrated circuits is inconsequential and the better your yield (which is THE most important parameter of submicron manufacturing).



                How to get around this?

                Either change the driving parameters. In an economy where efficiency isn't even remotely relevant, you can choose convenience. Such an economy will instantly collapse as soon as it touches a more efficient one, so this requires an isolated economy (Soviet-style or even North-Korea-style). It takes some extra creativity to design a world economy where a massively less efficient economy isn't voted down by people leaving with their feet; historically this was enforced by oppressive regimes, though it's possible the people stay at a lower level of income and goods for other reasons.

                Or posit basic components that are better at being trinary than transistors are at being binary. Somebody with a better background in microelectronics than me might be able to propose something that sounds credible, or maybe something that isn't based on classic electrical currents: quantum devices, maybe, or something photonic.



                Why is this not done much in literature?

                Because, ultimately, it does not matter much whether you have bits or trits. Either way, you bunch together as many of them as you need to represent N decimal digits. Software engineers don't care much, unless they are the ones who write the basic algorithms for addition/subtraction/etc., or the ones who write the algorithms that need to be fast (i.e. those that deal with large amounts of data, whether it's a huge list of addresses or the pixels on the screen).

                Some incidental numbers would likely change. Bunching 8 bits into a byte is convenient because 8 is itself a power of 2; that's why 256 (2^8) tends to pop up in the number of screen colors and various other places. With trinary computers, you'd likely use trytes of nine trits, giving 19683 values. HDR would come much later or not at all, because RGB would already have far more color nuances, so there would be some non-obvious differences. (A quick check of those numbers follows below.)
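
                A back-of-the-envelope check of those numbers in plain Python:

                    byte_levels = 2 ** 8     # 256 levels per 8-bit channel
                    tryte_levels = 3 ** 9    # 19683 levels per 9-trit channel

                    print(byte_levels, tryte_levels)    # 256 19683
                    print(tryte_levels / byte_levels)   # ~76.9x finer per channel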



                You can simply make it a background fact, never highlight it, just to avoid the explanation.

                Which raises the counter-question: what's the plot device you need trinary for?






                share|improve this answer

















                • 1




                  Your answer boils down to: "Your world isn't interesting", which isn't very helpful
                  – pipe
                  Nov 27 at 10:03






                • 2




                  "Bunching 8 bits into a byte is helpful because 8 is a power of 2" - which doesn't explain why many computers (even up to the 1980s) did not have 8 bits in a byte. Word lengths of 12, 14 and 18 bits were used, and later bigger numbers including 48 and 60 bits (divided into ten 6-bit "characters").
                  – alephzero
                  Nov 28 at 0:08












                • @alephzero The drive towards 8-bit bytes isn't a very strong one, admittedly. But eventually it did converge towards 8-bit bytes. Maybe the actual drive was that it was barely enough to keep an ASCII character, and that drive played out in times where you wouldn't want to "waste" an extra byte, and the idea of supporting multiple character sets was a non-issue because the Internet didn't exist yet. Still, I'm pretty sure some bit fiddling critically depends on the bit count being a power of two... though I'd have trouble finding such an algorithm, admittedly.
                  – toolforger
                  Nov 28 at 7:13


















                up vote
                0
                down vote













                Cryptocurrency scammers having convinced sufficiently many big corporations and governments to become partners in their pyramid scheme that economies of scale make their inefficient and ridiculous ternary-logic hardware cheaper than properly-designed computers.






                share|improve this answer




























                  up vote
                  -1
                  down vote













                  The strength of binary is that it's fundamentally a yes/no logic system; the weakness of binary is that it's fundamentally a yes/no logic system: you need multiple layers of logic to build "yes, and" statements out of binary logic. The smallest change you would need to move away from binary (in terms of having the rest of the world be the same but computing be different) would be to have the people who pioneered the science of computers, particularly Turing (thanks @Renan), aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc. in varying combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates and a more complex programming language; consequently computers will be more expensive, more delicate, and harder to program.



                  A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
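
                  As a minimal sketch of what such a non-yes/no logic primitive could look like (Kleene's three-valued logic in plain Python, with false/unknown/true encoded as 0/1/2; this is one illustrative choice, not the only possible one):

                      F, U, T = 0, 1, 2   # false, unknown, true

                      def and3(a, b): return min(a, b)   # AND is the minimum
                      def or3(a, b):  return max(a, b)   # OR is the maximum
                      def not3(a):    return 2 - a       # NOT reflects around "unknown"

                      print(and3(T, U))   # 1: true AND unknown = unknown
                      print(or3(F, U))    # 1: false OR unknown = unknown
                      print(not3(U))      # 1: NOT unknown = unknown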






                  share|improve this answer



















                  • 1




                    You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
                    – Renan
                    Nov 26 at 14:34










                  • @Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
                    – Ash
                    Nov 26 at 14:36










                  • Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
                    – chasly from UK
                    Nov 26 at 14:44












                  • @chaslyfromUK Yes, if transistors are used, binary is mechanically inherent; but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system, transistors can't be used, and different circuitry will be required.
                    – Ash
                    Nov 26 at 14:53












                  • @Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
                    – AlexP
                    Nov 27 at 11:06














                  24 Answers
                  24






                  active

                  oldest

                  votes
















                  up vote
                  42
                  down vote













                  Non-binary computers, in particular ternary computers, have been built in the past (emphasis mine).




                  One early calculating machine, built by Thomas Fowler entirely from wood in 1840, operated in balanced ternary. The first modern, electronic ternary computer Setun was built in 1958 in the Soviet Union at the Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers which eventually replaced it, such as lower electricity consumption and lower production cost.




                  If you want to make ternary computers the standard, I think you should leverage those advantages: make energy more expensive, so that saving energy is a big advantage, and make production more expensive.



                  Note that, since smelting silicon is an energy intensive activity, already increasing the cost of energy will indirectly affect the production costs.
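
                  For readers unfamiliar with it, here is a minimal sketch of balanced ternary, the digit system Fowler's machine and Setun used (plain Python; the function name is invented for illustration): each digit is -1, 0 or +1, written below as -, 0, +.

                      def to_balanced_ternary(n):
                          """Convert an integer to a balanced-ternary string of -, 0, +."""
                          if n == 0:
                              return "0"
                          digits = []
                          while n != 0:
                              r = n % 3
                              if r == 2:            # remainder 2 becomes digit -1, carry +1
                                  r = -1
                                  n += 1
                              digits.append("+0-"[1 - r])   # 1 -> '+', 0 -> '0', -1 -> '-'
                              n //= 3
                          return "".join(reversed(digits))

                      print(to_balanced_ternary(8))    # +0-  (9 - 1)
                      print(to_balanced_ternary(-5))   # -++  (-9 + 3 + 1)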






                  share|improve this answer

















                  • 26




                    L.Dutch - Although I answered differently I think the claim about trinary being energy saving is worth following up. Can you back this up with actual references and research? I'd be interested because I'm reluctant to accept it without being convinced. In particular I wonder if the cost of producing the trinary technology would offset the minor savings of using it.
                    – chasly from UK
                    Nov 26 at 14:00








                  • 7




                    It needed more memory when memory was expensive and limited. It demands more advanced components(3 states). It takes more time and knowledge to build them. And after binary had so much behind it is just too wasteful. There is no point to be better if you are too demanding and late.
                    – Artemijs Danilovs
                    Nov 26 at 15:42








                  • 19




                    From an information theoretic viewpoint, the most efficient base to compute in would be "e", but since that's not an integer, 3 would be the closest integer base.
                    – Tangurena
                    Nov 26 at 16:40






                  • 11




                    Also keep in mind that Setun was more efficient than binary computers largely because of its design - it came during a major transitional period where semiconductor diodes were just becoming available but transistors had not yet properly matured. They built Setun with diodes and magnetic cores (a system amenable to a three-state implementation) and this would be competing with vacuum tube based computers of the time. With transistor based electronics introduced this gap slammed shut - dramatically. Computers today are about a trillion times more efficient - that's a tough record to beat.
                    – J...
                    Nov 26 at 18:22






                  • 3




                    @Tangurena I honestly can't tell whether you're joking or just being mathematically deep. Nice comment either way...
                    – leftaroundabout
                    Nov 27 at 13:14















                  answered Nov 26 at 13:29 by L.Dutch


                  up vote
                  34
                  down vote













                  Instead of avoiding it, transcend binary:



                  Either let the evolution of technology take its course and somehow create a demand for non-binary processors, analogous to what is happening now in the cryptocurrency scene: the developers of IOTA based their project on a ternary architecture model and are even working on a ternary processor (JINN).



                  Or let aggressive patenting and licensing in the early stages of binary processors (e.g. a general patent for binary processors due to lobbying or misjudgements in the patent office) be the cause for starting work on non-binary processors with less restrictive and more collaborative patents.



                  Patentability requirements are: novelty, usefulness, and non-obviousness [1].




                  [the] nonobviousness principle asks whether the invention is an adequate distance beyond or above the state of the art [2]




                  So this could be used to have a patent granted on binary processors. And even if it was an illegitimate patent that would later be revoked in lawsuits, the situation could still give rise to non-binary processors.






                  share|improve this answer



















                  • 23




                    You should focus on that second point and expand it more, that sounds interesting.
                    – kingledion
                    Nov 26 at 14:41










                  • Free/open hardware doesn't get monetized very well.
                    – RonJohn
                    Nov 26 at 14:42










                  • @RonJohn That's right. I'll update the answer. Maybe less restrictive patenting/licensing.
                    – mike
                    Nov 26 at 14:45






                  • 1




                    Advanced quantum computers could be a good choice for option one.
                    – Vaelus
                    Nov 26 at 15:54






                  • 2




                    @JohnDvorak The basis may be binary, but the superpositions are not. While we measure the results of quantum computation as binary numbers, the actual computations are not themselves binary.
                    – Vaelus
                    Nov 26 at 16:37

















                  edited Nov 27 at 13:51; answered Nov 26 at 14:40 by mike






                  up vote
                  29
                  down vote













                  I would like to advance the idea of an analog computer.



                  Analog computers are something like the holy grail of electronics. They have the potential for vastly more computing power, limited only by the voltage or current discriminator (i.e., the precision with which an electric state or condition can be measured).



                  The reason we don't have them is because using transistors in their switching mode is simple. Simple, simple, simple. So simple, that defaulting everything to the lowest common denominator (binary, single-variable logic) was obvious.



                  But even today, change is coming.




                  Analog computing, which was the predominant form of high-performance computing well into the 1970s, has largely been forgotten since today's stored program digital computers took over. But the time is ripe to change this. (Source)




                   




                  If analog and hybrid computers were so valuable half a century ago, why did they disappear, leaving almost no trace? The reasons had to do with the limitations of 1970s technology: Essentially, they were too hard to design, build, operate, and maintain. But analog computers and digital-analog hybrids built with today’s technology wouldn’t suffer the same shortcomings, which is why significant work is now going on in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits.



                  ...



                  They were complex, quirky machines, requiring specially trained personnel to understand and run them—a fact that played a role in their demise.



                  Another factor in their downfall was that by the 1960s digital computers were making large strides, thanks to their many advantages: straightforward programmability, algorithmic operation, ease of storage, high precision, and an ability to handle problems of any size, given enough time. (Source)




                  But, how to get there without getting hung up on the digital world?




                  • A breakthrough in discrimination. Transistors, for all their value, are only as good as their manufacturing process. The more precisely constructed the transistor, the more precise the voltage measurement can be. The more precise the voltage measurement, the greater the programmatic value of a change in voltage = faster computing and (best of all for most space applications) faster reaction to the environment.


                  • Breakthrough in modeling equations. Digital computers are, by comparison, trivial to program (hence, BASIC). Their inefficiency is irrelevant compared to their ease of use. However, this is because double integration is a whopping difficult thing to do on paper, much less to describe such that a machine can process it. But what if we could have languages like Wolfram, R, or Haskell without having to go through the digital revolution of BASIC, PASCAL, FORTRAN, and C first? Our view of programming is very much based on how we perceive (or are influenced by) the nature of computation. Had someone come up with an efficient and flexible mathematical language before the discovery of switching transistors... the world would have changed forever. (See the sketch just after this list.)
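
                  To make that second bullet concrete, here is a minimal sketch, written (ironically) in digital Python, of what an analog computer wires up physically: two integrators in a feedback loop solving x'' = -x. On a real analog machine each integrator is an op-amp circuit and time flows continuously; the time step below is a crude digital stand-in.

                      dt = 0.001
                      x, v = 1.0, 0.0                     # initial position and velocity

                      for _ in range(int(6.283 / dt)):    # run for ~one period (2*pi)
                          a = -x                          # feedback wire: acceleration = -x
                          v += a * dt                     # first integrator
                          x += v * dt                     # second integrator

                      print(round(x, 2))                  # ~1.0: the loop traced out cos(t)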



                  Would this entirely remove digital from the picture?



                  Heck, no. That's like saying the development of a practical Lamborghini (if the word practical can ever be applied to a Lamborghini) before, say, the Edsel would mean we would never have seen the Datsun B210. The single biggest weakness of analog computing is the human-to-machine interface. Computing in real time rather than through a series of discrete, often barely related steps is how our brains work, but that doesn't translate well to telling a machine how to do its job. The odds are good that a hybrid machine (digital interface to an analog core) would be the final solution (as it likely will be today). Is this germane to your question? Not particularly.



                  Conclusion



                  Two breakthroughs: one in transistor manufacture and the other in symbolic programming, are all that would be needed to advance analog computation with all of its limitless computational power over digital computing.






                  share|improve this answer
                  answered Nov 26 at 16:15









                  JBH

                  • It's happening, although slowly: scientificamerican.com/article/…
                    – Jan Dorniak
                    Nov 26 at 20:39






                  • 8




If neural networks had been better developed before digital surpassed analog, perhaps the energy savings of analog neural networks would have prevented binary's triumph. This change might have happened if only Marvin Minsky had discovered the potential of backpropagation in his book "Perceptrons", rather than focusing on neural networks' limitations.
                    – AShelly
                    Nov 26 at 23:40






                  • 2




The source seems pretty biased. The largest analog computer setup I'm aware of was the Dutch "Deltar" simulation of the national flood barrier system. While it was used in the '70s, it was already outdated at the time. Its design dated back to the '40s, and it was built in the '60s. And very importantly, it was not general-purpose at all. It wasn't even domain-general; it simulated the Dutch water system and nothing else.
                    – MSalters
                    Nov 27 at 15:35










                  • The Lamborghini 2.86 DT: lectura-specs.com/en/model/agricultural-machinery/… seems quite practical to me, but I do get your point.
                    – BentNielsen
                    Nov 29 at 0:43




                  up vote
                  14
                  down vote













                  Having thought about this and looked at L.Dutch's answer, I may withdraw my original answer (or leave it just for interest).



                  Instead I will give a political answer.



As mentioned by L.Dutch, the Soviets came up with a ternary system (see below). Because of the limited use of the Russian language throughout the world, the Soviets often resented the fact that US scientific papers got more credence; after all, English is the lingua franca of science. (This is true, by the way, not a fiction; I'll look for references.)



Suppose the Russians had won a war against the West. It was common in Soviet Russia for science to be heavily politicised (again, I'll look for references). Therefore, regardless of the validity of a non-binary system, the Russians could have mandated ternary or some other base simply as a form of triumphalism.



                  Note - I'm chickening out of finding references at the moment. I've found some but they involve delving into Marxist doctrine or buying an expensive book. My personal knowledge of the situation came from talking to a British scientist who was digging through old Russian papers looking for bits that had been missed or had been distorted by doctrine. Maybe I'll delve further but not right now.






                  The first modern, electronic ternary computer Setun was built in 1958
                  in the Soviet Union at the Moscow State University by Nikolay
                  Brusentsov



                  https://en.wikipedia.org/wiki/Ternary_computer







                  share|improve this answer
                  edited Nov 26 at 16:29

























                  answered Nov 26 at 14:53









                  chasly from UK













                  • This would hardly be a minimal change.
                    – mike
                    Nov 26 at 14:56






                  • 3




                    @mike - It's not a small change but that doesn't exclude it being a minimal one, unless you can think of a smaller political change, in which case go ahead.
                    – chasly from UK
                    Nov 26 at 14:58








                  • 3




I agree that it can be minimal in a political solution space. I hereby withdraw my comment :D
                    – mike
                    Nov 26 at 15:12










• A more minimal change could be ternary computing becoming widespread in the Eastern bloc (and perhaps China, though that would involve changing the Sino-Soviet split, with ripple effects). Later on, the transition to a freer economy (either by reform or by collapse of the USSR) could lead to ternary computers being widespread without something as drastic as WWIII.
                    – 0something0
                    Nov 30 at 6:03





                  up vote
                  12
                  down vote













A ternary system would be preferred in a world where data storage cost exceeds all other cost considerations in computers. This preference would be due to radix economy, which essentially quantifies the relative cost of storing numbers in a particular numbering system: representing a large number N in base b takes about log_b(N) digit positions with b possible states each, for a total cost proportional to b/ln(b). Euler's number e ≈ 2.718 minimizes that cost. Among integers, 3 has the lowest radix economy, lower than 2 and 4 (which have the same, since 4/ln(4) = 2/ln(2)).



If the first storage medium used for computing had stored ternary digits for less, or only slightly more, cost than binary digits, and if processing cost had been insignificant compared to storage cost, ternary computing might have become the dominant standard. The advantage of ternary systems is small (around 5 percent), but it could be important if storage cost were a serious consideration.
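
To put rough numbers on that 5 percent (a minimal Python sketch; the function name and the cost model, digit positions times states per position, are my own illustration of the standard definition):

    import math

    # Radix economy: cost of representing n in base b, modeled as
    # (number of digit positions) * (states per position).
    def radix_economy(b, n):
        return b * (math.floor(math.log(n, b)) + 1)

    n = 10**6
    for b in (2, 3, 4, 10):
        print(f"base {b:2}: cost {radix_economy(b, n):3}, "
              f"asymptotic b/ln(b) = {b / math.log(b):.3f}")
    # base 3 gives 39 against 40 for bases 2 and 4; asymptotically,
    # b/ln(b) favors base 3 over base 2 by roughly 5 percent.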



                  Binary computers dominate today mostly because electricity was the first effective medium to store and process numbers, and a single threshold voltage to distinguish between two states is easier to manage than two or more thresholds for three or more states.
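
The threshold point can be made concrete with a toy decoder (Python; the voltage levels and thresholds are invented purely for illustration): one threshold leaves a wide noise margin, while two thresholds for three states shrink the margins and add a comparison to every read.

    # Toy model: decode a noisy voltage on a 0..3 V line into logic states.
    def decode_binary(v):        # one threshold, ~1.5 V of noise margin
        return 0 if v < 1.5 else 1

    def decode_ternary(v):       # two thresholds, margins roughly halved
        if v < 1.0:
            return 0
        return 1 if v < 2.0 else 2

    for v in (0.2, 1.4, 1.6, 2.7):
        print(f"{v} V -> binary {decode_binary(v)}, ternary {decode_ternary(v)}")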



Build your transistors in a medium that can store and process ternary digits efficiently, and emphasize the high cost of storage. A mechanical example would be a switch that can take three positions in a triangle.






                  share|improve this answer
                  answered Nov 27 at 11:49









                  pommy













                  • Very interesting!
                    – kingledion
                    Nov 27 at 15:11





                  up vote
                  10
                  down vote













Toolforger has one thing right: binary computers are the most efficient computing devices possible. Period. Ternary has no technological advantage whatsoever.



                  However, I'm going to give a suggestion of how you can offset the disadvantage of ternary computing, to allow your society to actually use ternary computers instead of binary ones:



                  Your society has evolved to use a balanced numeral system.



Balanced numeral systems don't just use positive digits like we do; they use an equal number of negative and positive digits. As such, balanced ternary uses three digits for -1, 0, and 1 instead of the unbalanced 0, 1, and 2. This has several beneficial consequences:




• Balanced numeral systems have symmetries that unbalanced systems lack. Not only can you exploit commutativity when doing calculations (you know what 2+3 is, so you know what 3+2 is), but also symmetries based on sign: (-3)+(-2) = -(3+2), -3*2 = 3*-2, -3*-2 = 3*2, and 3*-2 = -(3*2).


                  • You have more computations with trivial outcome: x+(-x) = 0 and -1*x = -x.



• The effect is that you have much less to learn when picking up a balanced numeral system. For instance, unbalanced decimal requires you to learn 81 data points by heart to perform the four basic operations, whereas balanced nonal (9 digits from -4 to 4) requires only 31 data points, of which only 6 are for multiplication. The lines below use -4 = d, -3 = c, -2 = b, and -1 = a as negative digits:



    2*2 = 0*9 + 4 =  4
    2*3 = 1*9 - 3 = 1c
    2*4 = 1*9 - 1 = 1a
    3*3 = 1*9 + 0 = 10
    3*4 = 1*9 + 3 = 13
    4*4 = 2*9 - 2 = 2b


                    The entire rest is either trivial or follows from symmetries. That's all the multiplication table your school kids need to learn!



• Because you can get both positive and negative carries, you get far fewer and smaller carries in long additions. They simply tend to cancel each other out.


                  • Because you have negative digits as well as positive ones, negative numbers are just an integral part of the system. In decimal, you have to decide which number is greater when doing a subtraction, then subtract the smaller number from the larger one, then reattach a sign to the result based on which of the two numbers was greater. In balanced systems you don't care which number is greater, you just do the subtraction. Then you look at the result and see whether it's positive or negative...



                  As a matter of fact, I once learned to use balanced nonal just for fun, and in general, it's indeed much easier to use than decimal.
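
For anyone who wants to play along, here is a small Python sketch (my own illustration, reusing this answer's a/b/c/d notation) that converts ordinary integers into balanced nonal:

    # Balanced nonal: base 9 with digits -4..4, written d c b a 0 1 2 3 4.
    DIGITS = {-4: 'd', -3: 'c', -2: 'b', -1: 'a',
               0: '0', 1: '1', 2: '2', 3: '3', 4: '4'}

    def to_balanced_nonal(n):
        if n == 0:
            return '0'
        out = []
        while n != 0:
            r = n % 9          # remainder in 0..8
            if r > 4:          # fold into the balanced range -4..4
                r -= 9
            out.append(DIGITS[r])
            n = (n - r) // 9
        return ''.join(reversed(out))

    for n in (5, 6, 12, 16, -6):
        print(n, '->', to_balanced_nonal(n))
    # 5 -> 1d, 6 -> 1c, 12 -> 13, 16 -> 2b, -6 -> a3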



My point is: to anyone who has been brought up calculating in a balanced numeral system, an unbalanced system would feel so unimaginably awkward and cumbersome that they will basically think ternary is the smallest base you can use. Because binary lacks the negative digits, how are you supposed to compute with that? What do you do when you subtract 5 from 2? You absolutely need a -1 for that!



As such, a society of people with a balanced numeral system background may conceivably settle on balanced ternary computers instead of binary ones. And once a chunk of nine balanced ternary digits has been generally accepted as the smallest unit of information exchange, no one will want to use 15 bits (what an awkward number! 3^9 = 19,683 just overshoots 2^14 = 16,384, so 15 bits are needed) to transmit the same amount of information in a binary fashion, with all the losses that would imply.



                  The result is basically a lock-in effect to balanced ternary that would keep people from using binary hardware.





                  Aside: Unbalanced decimal vs. balanced nonal



                  Here is a more detailed comparison between decimal and balanced nonal. I'm using a, b, c, d as the negative digits -1, -2, -3, -4 here, respectively:





                  • Negation



Here the learning effort for decimal is zero. For balanced nonal, you have to learn the following table with nine entries:



                            | d c b a 0 1 2 3 4
                    --------+------------------
                    inverse | 4 3 2 1 0 a b c d



                  • Addition



Decimal has the following addition table; the second table below shows the 45 entries that need to be learned:



    + |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |  0  1  2  3  4  5  6  7  8  9
    1 |  1  2  3  4  5  6  7  8  9 10
    2 |  2  3  4  5  6  7  8  9 10 11
    3 |  3  4  5  6  7  8  9 10 11 12
    4 |  4  5  6  7  8  9 10 11 12 13
    5 |  5  6  7  8  9 10 11 12 13 14
    6 |  6  7  8  9 10 11 12 13 14 15
    7 |  7  8  9 10 11 12 13 14 15 16
    8 |  8  9 10 11 12 13 14 15 16 17
    9 |  9 10 11 12 13 14 15 16 17 18

    + |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |
    1 |     2
    2 |     3  4
    3 |     4  5  6
    4 |     5  6  7  8
    5 |     6  7  8  9 10
    6 |     7  8  9 10 11 12
    7 |     8  9 10 11 12 13 14
    8 |     9 10 11 12 13 14 15 16
    9 |    10 11 12 13 14 15 16 17 18


The same pair of tables for balanced nonal leaves only 16 entries that need to be learned:



    + |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d | a1 a2 a3 a4  d  c  b  a  0
    c | a2 a3 a4  d  c  b  a  0  1
    b | a3 a4  d  c  b  a  0  1  2
    a | a4  d  c  b  a  0  1  2  3
    0 |  d  c  b  a  0  1  2  3  4
    1 |  c  b  a  0  1  2  3  4 1d
    2 |  b  a  0  1  2  3  4 1d 1c
    3 |  a  0  1  2  3  4 1d 1c 1b
    4 |  0  1  2  3  4 1d 1c 1b 1a

    + |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d |
    c |
    b |
    a |
    0 |
    1 |                 2
    2 |           1     3  4
    3 |        1  2     4 1d 1c
    4 |     1  2  3    1d 1c 1b 1a


                    Note the missing diagonal of zeros (a number plus its inverse is zero), and the missing upper left half (the sum of two numbers is the inverse of the sum of the inverse numbers).



                    For instance, to calculate b + d, you can easily derive the result as b + d = inv(2 + 4) = inv(1c) = a3.




                  • Multiplication



In decimal, you have to perform quite a bit of rote learning; again, the second table shows the entries to learn:



    * |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |  0  0  0  0  0  0  0  0  0  0
    1 |  0  1  2  3  4  5  6  7  8  9
    2 |  0  2  4  6  8 10 12 14 16 18
    3 |  0  3  6  9 12 15 18 21 24 27
    4 |  0  4  8 12 16 20 24 28 32 36
    5 |  0  5 10 15 20 25 30 35 40 45
    6 |  0  6 12 18 24 30 36 42 48 54
    7 |  0  7 14 21 28 35 42 49 56 63
    8 |  0  8 16 24 32 40 48 56 64 72
    9 |  0  9 18 27 36 45 54 63 72 81

    * |  0  1  2  3  4  5  6  7  8  9
    --+------------------------------
    0 |
    1 |
    2 |        4
    3 |        6  9
    4 |        8 12 16
    5 |       10 15 20 25
    6 |       12 18 24 30 36
    7 |       14 21 28 35 42 49
    8 |       16 24 32 40 48 56 64
    9 |       18 27 36 45 54 63 72 81


But in balanced nonal, the learning table is reduced heavily: the three quadrants on the lower left, the upper right and the upper left all follow from the lower right one via symmetry.



    * |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d | 2b 13 1a  4  0  d a1 ac b2
    c | 13 10 1c  3  0  c a3 a0 ac
    b | 1a 1c  4  2  0  b  d a3 a1
    a |  4  3  2  1  0  a  b  c  d
    0 |  0  0  0  0  0  0  0  0  0
    1 |  d  c  b  a  0  1  2  3  4
    2 | a1 a3  d  b  0  2  4 1c 1a
    3 | ac a0 a3  c  0  3 1c 10 13
    4 | b2 ac a1  d  0  4 1a 13 2b

    * |  d  c  b  a  0  1  2  3  4
    --+---------------------------
    d |
    c |
    b |
    a |
    0 |
    1 |
    2 |                    4
    3 |                   1c 10
    4 |                   1a 13 2b


                    For instance, to calculate c*d, you can just do c*d = 3*4 = 13. Or for 2*b, you derive 2*b = inv(2*2) = inv(4) = d. It's really a piece of cake, once you are used to it.




Taking all of this together, you need to learn




                  • for decimal:

                    0 inversions

                    45 summations

                    36 multiplications
                    Total: 81


                  • for balanced nonal:

                    9 inversions

                    16 summations

                    6 multiplications
                    Total: 31
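
These tallies can be double-checked mechanically. The short Python sketch below (my own verification aid, not part of the original argument) enumerates the non-trivial facts under exactly the symmetries described above:

    from itertools import combinations_with_replacement as pairs

    # Decimal: commutativity only; 0 (and 1 for *) are trivial operands.
    dec_add = list(pairs(range(1, 10), 2))        # 45 facts
    dec_mul = list(pairs(range(2, 10), 2))        # 36 facts

    digits = [d for d in range(-4, 5) if d != 0]

    # Addition: commutativity plus negate-both symmetry (-x)+(-y) = -(x+y);
    # x+(-x) = 0 is trivial. Fold each pair onto a canonical representative.
    def canon_add(i, j):
        p = tuple(sorted((i, j)))
        return min(p, tuple(sorted((-i, -j))))

    bal_add = {canon_add(i, j) for i, j in pairs(digits, 2) if i + j != 0}

    # Multiplication: flipping the sign of either factor just flips the
    # result, so every fact folds onto positive digits; 1 and a are trivial.
    bal_mul = {tuple(sorted((abs(i), abs(j)))) for i, j in pairs(digits, 2)
               if abs(i) > 1 and abs(j) > 1}

    print(len(dec_add) + len(dec_mul))            # 45 + 36 = 81
    print(9 + len(bal_add) + len(bal_mul))        # 9 + 16 + 6 = 31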







                  share|improve this answer























                  • I can't quite figure how you get 81 data points for our decimal system. Care to elaborate?
                    – Wildcard
                    Nov 28 at 0:21






                  • 3




                    @Wildcard It's 45 data points for addition (1+1, 1+2, 1+3, ..., 2+2, 2+3, ..., 9+9) and 36 data points for multiplication (2*2, 2*3, 2*4, ..., 3*3, 3*4, ..., 9*9). I have removed the trivial additions with zero, the trivial multiplications with zero and one, and the half of the table that follows from commutativity.
                    – cmaster
                    Nov 28 at 9:05






                  • 1




                    @Wildcard For balanced nonal, it's similar, except that you need to add 9 data points for inversion. The summation table is reduced by the trivial additions with zero, the trivial additions that yield zero (x+(-x) = 0), by commutativity, and the sign symmetry (-x+(-y) = -(x+y)), so only 16 data points remain. For multiplication, since we already know inversion, multiplication with -1, 0, and 1 is trivial, multiplications with a negative factor follow from symmetry, so we are only left with the table for the digits 2, 3, and 4. Which yields the six data points I've shown in my answer.
                    – cmaster
                    Nov 28 at 9:12










                  • @Wildcard I have now updated my answer with a more detailed comparison. Hope you like it.
                    – cmaster
                    Nov 28 at 16:16















                  up vote
                  10
                  down vote













                  Toolforger has one thing right: Binary computers are the most efficient computing devices possible. Period. Ternary has no technological advantage, whatsoever.



                  However, I'm going to give a suggestion of how you can offset the disadvantage of ternary computing, to allow your society to actually use ternary computers instead of binary ones:



                  Your society has evolved to use a balanced numeral system.



                  Balanced numeral systems don't just use positive digits like we do, they use an equal number of negative and positive digits. As such, balanced ternary uses three digits for -1, 0, and 1 instead of the unbalanced 0, 1, and 2. This has several beneficial consequences:




                  • Balanced numeral systems have symmetries that unbalanced systems lack. Not only can you exploit commutativity when doing calculations (you know what 2+3 is, so you know what 3+2 is), but also symmetries based on sign: -3-2 = -(3+2), -3*2 = 3*-2, -3*-2 = 3*2, and 3*-2 = -(3*2).


                  • You have more computations with trivial outcome: x+(-x) = 0 and -1*x = -x.



                  • The effect is, that you have much less to learn when learning balanced numeral systems. For instance, unbalanced decimal requires you to learn 81 data points by heart to perform all the four basic computations, whereas balanced nonal (9 digits from -4 to 4) requires only 31 data points, of which only 6 are for multiplication. The right-most column uses `-4 = d, -3 = c, -2 = b, and -1 = a as negative digits:



                    2*2 = 0*9 +4 =  4
                    2*3 = 1*9 -3 = 1c
                    2*4 = 1*9 -1 = 1a
                    3*3 = 1*9 +0 = 10
                    3*4 = 1*9 +3 = 13
                    4*4 = 2*9 -2 = 2b


                    The entire rest is either trivial or follows from symmetries. That's all the multiplication table your school kids need to learn!



                  • Because you can get both positive and negative carries, you get much less and smaller carries in long additions. They simply tend to cancel each other out.


                  • Because you have negative digits as well as positive ones, negative numbers are just an integral part of the system. In decimal, you have to decide which number is greater when doing a subtraction, then subtract the smaller number from the larger one, then reattach a sign to the result based on which of the two numbers was greater. In balanced systems you don't care which number is greater, you just do the subtraction. Then you look at the result and see whether it's positive or negative...



                  As a matter of fact, I once learned to use balanced nonal just for fun, and in general, it's indeed much easier to use than decimal.



                  My point is: To anyone who has been brought up calculating in a balanced numeral system, an unbalanced system would just feel so unimaginable awkward and cumbersome that they will basically think that ternary is the smallest base you can use. Because binary lacks the negative digits, how are you supposed to compute with that? What do you do when you subtract 5 from 2? You absolutely need a -1 for that!



                  As such, a society of people with a balanced numeral system background may conceivably settle on balanced ternary computers instead of binary ones. And once a chunk of nine balanced ternary digits has been generally accepted as the smallest unit of information exchange, no one will want to use 15 bits (what an awkward number!) to transmit the same amount of information in a binary fashion, with all the losses that would imply.



                  The result is basically a lock-in effect to balanced ternary that would keep people from using binary hardware.





                  Aside: Unbalanced decimal vs. balanced nonal



                  Here is a more detailed comparison between decimal and balanced nonal. I'm using a, b, c, d as the negative digits -1, -2, -3, -4 here, respectively:





                  • Negation



                    Here the learing effort for decimal is zero. For balanced nonal, you have to learn the following table with nine entries:



                            | d c b a 0 1 2 3 4
                    --------+------------------
                    inverse | 4 3 2 1 0 a b c d



                  • Addition



                    Decimal has the following addition table, the right table show the 45 entries that need to be learned:



                    + | 0  1  2  3  4  5  6  7  8  9    + | 0  1  2  3  4  5  6  7  8  9
                    --+----------------------------- --+-----------------------------
                    0 | 0 1 2 3 4 5 6 7 8 9 0 |
                    1 | 1 2 3 4 5 6 7 8 9 10 1 | 2
                    2 | 2 3 4 5 6 7 8 9 10 11 2 | 3 4
                    3 | 3 4 5 6 7 8 9 10 11 12 3 | 4 5 6
                    4 | 4 5 6 7 8 9 10 11 12 13 4 | 5 6 7 8
                    5 | 5 6 7 8 9 10 11 12 13 14 5 | 6 7 8 9 10
                    6 | 6 7 8 9 10 11 12 13 14 15 6 | 7 8 9 10 11 12
                    7 | 7 8 9 10 11 12 13 14 15 16 7 | 8 9 10 11 12 13 14
                    8 | 8 9 10 11 12 13 14 15 16 17 8 | 9 10 11 12 13 14 15 16
                    9 | 9 10 11 12 13 14 15 16 17 18 9 | 10 11 12 13 14 15 16 17 18


                    The same table for balanced nonal only has 16 entries that need to be learned:



                    + | d  c  b  a  0  1  2  3  4    + | d  c  b  a  0  1  2  3  4
                    --+-------------------------- --+--------------------------
                    d |a1 a2 a3 a4 d c b a 0 d |
                    c |a2 a3 a4 d c b a 0 1 c |
                    b |a3 a4 d c b a 0 1 2 b |
                    a |a4 d c b a 0 1 2 3 a |
                    0 | d c b a 0 1 2 3 4 0 |
                    1 | c b a 0 1 2 3 4 1d 1 | 2
                    2 | b a 0 1 2 3 4 1d 1c 2 | 1 3 4
                    3 | a 0 1 2 3 4 1d 1c 1b 3 | 1 2 4 1d 1c
                    4 | 0 1 2 3 4 1d 1c 1b 1a 4 | 1 2 3 1d 1c 1b 1a


                    Note the missing diagonal of zeros (a number plus its inverse is zero), and the missing upper left half (the sum of two numbers is the inverse of the sum of the inverse numbers).



                    For instance, to calculate b + d, you can easily derive the result as b + d = inv(2 + 4) = inv(1c) = a3.




                  • Multiplication



                    In decimal, you have to perform quite a bit of rote learning; again the second copy shows only the entries that must be memorized:



                    * |  0  1  2  3  4  5  6  7  8  9
                    --+------------------------------
                    0 |  0  0  0  0  0  0  0  0  0  0
                    1 |  0  1  2  3  4  5  6  7  8  9
                    2 |  0  2  4  6  8 10 12 14 16 18
                    3 |  0  3  6  9 12 15 18 21 24 27
                    4 |  0  4  8 12 16 20 24 28 32 36
                    5 |  0  5 10 15 20 25 30 35 40 45
                    6 |  0  6 12 18 24 30 36 42 48 54
                    7 |  0  7 14 21 28 35 42 49 56 63
                    8 |  0  8 16 24 32 40 48 56 64 72
                    9 |  0  9 18 27 36 45 54 63 72 81

                    * |  0  1  2  3  4  5  6  7  8  9
                    --+------------------------------
                    0 |
                    1 |
                    2 |        4
                    3 |        6  9
                    4 |        8 12 16
                    5 |       10 15 20 25
                    6 |       12 18 24 30 36
                    7 |       14 21 28 35 42 49
                    8 |       16 24 32 40 48 56 64
                    9 |       18 27 36 45 54 63 72 81


                    But in balanced nonal, the reduced table shrinks dramatically: the three quadrants on the lower left, the upper right, and the upper left all follow from the lower-right one via symmetry.



                    * |  d  c  b  a  0  1  2  3  4
                    --+---------------------------
                    d | 2b 13 1a  4  0  d a1 ac b2
                    c | 13 10 1c  3  0  c a3 a0 ac
                    b | 1a 1c  4  2  0  b  d a3 a1
                    a |  4  3  2  1  0  a  b  c  d
                    0 |  0  0  0  0  0  0  0  0  0
                    1 |  d  c  b  a  0  1  2  3  4
                    2 | a1 a3  d  b  0  2  4 1c 1a
                    3 | ac a0 a3  c  0  3 1c 10 13
                    4 | b2 ac a1  d  0  4 1a 13 2b

                    * |  d  c  b  a  0  1  2  3  4
                    --+---------------------------
                    d |
                    c |
                    b |
                    a |
                    0 |
                    1 |
                    2 |                    4
                    3 |                   1c 10
                    4 |                   1a 13 2b


                    For instance, to calculate c*d, you can just do c*d = 3*4 = 13. Or for 2*b, you derive 2*b = inv(2*2) = inv(4) = d. It's really a piece of cake, once you are used to it.




                  Taking all of this together, you need to learn




                  • for decimal:

                    0 inversions

                    45 summations

                    36 multiplications
                    Total: 81


                  • for balanced nonal:

                    9 inversions

                    16 summations

                    6 multiplications
                    Total: 31
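                  For readers who want to play with this themselves, here is a minimal Python sketch (mine, not part of the original answer) that converts ordinary integers into this balanced base-9 notation, using the same digit letters as above; the asserts spot-check a few entries of the tables:

                    DIGITS = {-4: 'd', -3: 'c', -2: 'b', -1: 'a',
                              0: '0', 1: '1', 2: '2', 3: '3', 4: '4'}

                    def to_balanced_nonal(n: int) -> str:
                        """Render an integer in balanced base 9 with digits -4..4."""
                        if n == 0:
                            return '0'
                        out = []
                        while n != 0:
                            r = n % 9          # remainder in 0..8
                            if r > 4:          # fold 5..8 down to -4..-1, carry one
                                r -= 9
                            n = (n - r) // 9
                            out.append(DIGITS[r])
                        return ''.join(reversed(out))

                    # Spot checks against the tables above:
                    assert to_balanced_nonal(2 * 3) == '1c'   #  6 = 1*9 - 3
                    assert to_balanced_nonal(4 * 4) == '2b'   # 16 = 2*9 - 2
                    assert to_balanced_nonal(-2 - 4) == 'a3'  # b + d = a3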







                  share|improve this answer
























                  up vote
                  10
                  down vote



















                  edited Nov 28 at 22:19

























                  answered Nov 27 at 18:39









                  cmaster

                  2,899514
















                  • I can't quite figure how you get 81 data points for our decimal system. Care to elaborate?
                    – Wildcard
                    Nov 28 at 0:21






                  • 3




                    @Wildcard It's 45 data points for addition (1+1, 1+2, 1+3, ..., 2+2, 2+3, ..., 9+9) and 36 data points for multiplication (2*2, 2*3, 2*4, ..., 3*3, 3*4, ..., 9*9). I have removed the trivial additions with zero, the trivial multiplications with zero and one, and the half of the table that follows from commutativity.
                    – cmaster
                    Nov 28 at 9:05






                  • 1




                    @Wildcard For balanced nonal, it's similar, except that you need to add 9 data points for inversion. The summation table is reduced by the trivial additions with zero, the trivial additions that yield zero (x+(-x) = 0), by commutativity, and the sign symmetry (-x+(-y) = -(x+y)), so only 16 data points remain. For multiplication, since we already know inversion, multiplication with -1, 0, and 1 is trivial, multiplications with a negative factor follow from symmetry, so we are only left with the table for the digits 2, 3, and 4. Which yields the six data points I've shown in my answer.
                    – cmaster
                    Nov 28 at 9:12










                  • @Wildcard I have now updated my answer with a more detailed comparison. Hope you like it.
                    – cmaster
                    Nov 28 at 16:16




























                  up vote
                  9
                  down vote













                  Base-4



                  This might be a natural choice for a society that perfected digital communication before digital computation.



                  Digital signals are often transmitted (i.e., “passed through the analogue world”) using quadrature phase-shift keying, a special form of quadrature amplitude modulation. This is generally more performant and reliable than simple amplitude modulation, and more efficient than frequency modulation.



                  QPSK / QAM by default use four different states, or a multiple of four, as the fundamental unit of information. We usually interpret this as “it always transmits two bits at a time”, but if this method had become standard before binary computers, we'd probably be used to measuring information in quats (?) rather than bits.



                  Ultimately, the computers would at the lowest level probably end up looking a lot like our binary ones, but with two bits usually paired together into a fundamental “4-logical unit”. Unlike binary-coded decimal, this doesn't incur any overhead of unused binary states.



                  And it could actually make sense to QPSK-encode even the local communication between processor and memory etc. – wireless transmission everywhere! – thus making the components “base-4 for all that can be seen”.
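                  To illustrate the point (my own sketch, not part of this answer; the particular digit-to-phase assignment is an arbitrary convention): a QPSK modulator is essentially a lookup table from base-4 digits to four carrier phases, so the “quat” really is the natural symbol of the channel.

                    import cmath

                    # One carrier phase per base-4 digit ("quat"), 90 degrees apart
                    # on the unit circle. The digit-to-phase mapping is a free choice.
                    CONSTELLATION = {q: cmath.exp(1j * (cmath.pi / 4 + q * cmath.pi / 2))
                                     for q in range(4)}

                    def modulate(quats):
                        """One complex baseband sample per quat -- no bits involved."""
                        return [CONSTELLATION[q] for q in quats]

                    print(modulate([0, 3, 1, 2]))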






                  share|improve this answer































                  edited Nov 27 at 14:01

























                  answered Nov 27 at 13:48









                  leftaroundabout

                  656510
















                  • What might be considered a related interesting factoid is that the DNA genetic code is also base-4. But I wouldn't say this should have any relevance to the development of computers.
                    – leftaroundabout
                    Nov 27 at 13:59










                  • Re your comment: biological computers, perhaps?
                    – Wildcard
                    Nov 28 at 0:18










                  • QAM waveforms are very different from binary PCM, though, and QAM can encode many numbers of states, not just 4
                    – endolith
                    Nov 28 at 18:56






                  • 1




                    @endolith yeah, but QAM usually encodes some 2²ⁿ states, i.e. a multiple of four, doesn't it? And anyways before computers, they'd plausibly not go over four states, i.e. QPSK – anything more is basically just exploiting excessive SNR to get higher data rate, but prior to computers they'd probably just engineer the SNR to be just enough for QPSK, which already gives the essential advantage.
                    – leftaroundabout
                    Nov 28 at 23:36


















                  Nov 28 at 23:36










                  up vote
                  6
                  down vote













                  It's almost completely irrelevant.



                  The binary nature of computers is very very very rarely relevant in practice. Just about the only practical situation where the binary nature of computers is relevant is when doing sophisticated error bounds analysis of floating point calculations.



                  In actual reality, there are quite a few aspects of modern computing which do not even rely on binary representations. For example, we all like SSDs, don't we? Well, modern cheap consumer SSDs are not binary devices — they use multi-level cells as their fundamental building blocks. For another example, we all like Gigabit Ethernet, don't we? Well, the unit of transmission in Gigabit Ethernet is an indivisible 8-bit octet (transmitted as a 10-bit symbol, but hey, who counts).



                  No modern computer (all right, hardly any modern computer) can access one bit of storage individually. Usually, the smallest addressable unit is an octet of eight bits, which can be seen as an indivisible atom with 256 possible values. (And even this is not really true; what exactly constitutes the atomic unit of memory access varies from architecture to architecture. Access to one individual bit is not atomic on any computer I know of.)



                  Donald Knuth's Art of Computer Programming, which is the closest thing we have to a fundamental text in informatics, famously uses the fictional MIX computer for practical examples — and one of the charming characteristics of MIX is that one does not know whether it's a binary or a decimal computer.



                  What actually matters is that modern computers are digital — in the computer, everything is a number. That the numbers in question are represented by tuples of octets is a detail which very rarely has any practical or even theoretical importance.
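                  A tiny illustration of that last point (my sketch, not the answer's): the same abstract number can be carried in base-256 octets or — as one of the comments below mentions — in base-10,000 limbs, and the arithmetic neither knows nor cares.

                    def to_limbs(n: int, base: int) -> list:
                        """Little-endian digit list of a non-negative n in the given base."""
                        limbs = []
                        while n:
                            n, r = divmod(n, base)
                            limbs.append(r)
                        return limbs or [0]

                    def from_limbs(limbs: list, base: int) -> int:
                        return sum(d * base ** i for i, d in enumerate(limbs))

                    n = 123456789
                    octets = to_limbs(n, 256)       # how a byte-oriented machine holds it
                    myriads = to_limbs(n, 10_000)   # a base-10,000 bignum library's view
                    assert from_limbs(octets, 256) == from_limbs(myriads, 10_000) == n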






                  share|improve this answer































                  edited Nov 30 at 18:22









                  endolith

                  1135














                  answered Nov 27 at 5:53









                  AlexP

                  34.7k779134
















                  • I don't see how this answers the question. In OP's world, the base is important enough that it needs a back story. At best, this should be a short comment on the question.
                    – pipe
                    Nov 27 at 9:58












                    @pipe: The point is that the physical representation is irrelevant, and it cannot be made relevant. (And, unlike other fields where I provide answers or comments on this site, this is actually my profession.) FYI, base 2 is relevant only when referring to the physical representation; any computer can store and manipulate numbers in any base you wish. For example, base 10,000 is moderately popular for arbitrary-precision computations.
                    – AlexP
                    Nov 27 at 11:12








                  • 1




                    Yes, I'm well aware that the base is not "important", but apparently OP thinks that the physical representation is interesting, interesting enough to want to build a world around it. Obviously such a computer could work in any base as well. Also, a lot of things in our world today are designed the way they are because computers are predominantly base 2, for example ANSI art (256 codepoints, 16 colors), 16-bit CD audio, JPEG quantization blocks being 8x8 pixels affecting the quality of all images online, etc.
                    – pipe
                    Nov 27 at 12:16










                    @pipe: On the other hand, the Web-safe color palette has 6³ = 216 colors, the CD sampling rate is 44,100 samples/second, bandwidth is measured in powers of ten bits per second, Unicode has room for 1,114,112 = 17×2¹⁶ code points...
                    – AlexP
                    Nov 27 at 12:23


















                  – AlexP
                  Nov 27 at 12:23










                  up vote
                  5
                  down vote













                  Get rid of George Boole, inventor of Boolean Algebra, probably the main mathematical foundation of computer logic.



                  Without Boolean Algebra, regular algebra would give quite an edge to decimal computers, even if you needed three to four times as much hardware per digit.



                  There's no need to kill him; just have something happen that stops his research, or gets him interested in another field instead.






                  answered Nov 26 at 14:29
                  Emilio M Bumachar
                  • Comments are not for extended discussion; this conversation has been moved to chat.
                    – L.Dutch
                    Nov 27 at 19:27




























                  up vote
                  5
                  down vote













                  EDIT - On reading the answer by L.Dutch, I see that there is an energy-saving argument for using trinary. I'd be interested to find out to what extent that is true in theory. Crucially, the OP talks about transistors rather than thermionic valves, and that could make a difference. There are also other energy questions to address beyond the simple switching of a transistor. It would be good to know the extent of this saving and any extra cost associated with building and maintaining the hardware. Heat dissipation may also be an issue.



                  I remain open-minded as well as interested in this approach.





                  I don't think there is a historical justification for your premise as far as transistors are concerned, so instead I will just say:



                  The minimum historical change is No Electronics



                  It's possible to use other bases, but it's just a really bad idea.



                  IBM 1620 Model I, Level H




                  IBM 1620 data processing machine with IBM 1627 plotter, on display at
                  the 1962 Seattle World's Fair.

                  The IBM 1620 was announced by IBM on
                  October 21, 1959,[1] and marketed as an inexpensive "scientific
                  computer".[2] After a total production of about two thousand machines,
                  it was withdrawn on November 19, 1970. Modified versions of the 1620
                  were used as the CPU of the IBM 1710 and IBM 1720 Industrial Process
                  Control Systems (making it the first digital computer considered
                  reliable enough for real-time process control of factory
                  equipment)[citation needed].



                  Being variable word length decimal, as opposed to
                  fixed-word-length pure binary, made it an especially attractive first
                  computer to learn on – and hundreds of thousands of students had their
                  first experiences with a computer on the IBM 1620.



                  https://en.wikipedia.org/wiki/IBM_1620




                  The key phrase there is variable word length decimal, which is a real faff and actually still uses binary at the electronic level.



                  Reasoning



                  Any electronic system other than binary will soon evolve into binary, because of how digital electronics works.



                  It is commonly supposed, by those not in the know, that zero voltage represents a binary zero and some arbitrary voltage, e.g. 5 volts, represents a 1. However, in the real world these voltages are never so precise. It is much easier to have two ranges with a specified changeover point.



                  Having to maintain, say, ten different voltages for ten different digits would be incredibly expensive to make, unreliable, and not worth the effort.
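
                  To put rough numbers on that claim, here is a toy Python simulation (illustrative only, not a circuit model): with the same Gaussian noise on a 0-5 V line, a two-level reading almost never misfires, while a ten-level reading fails constantly, because each decision window is five times narrower.

                      # Toy read-error rates for 2-level vs 10-level signalling on a
                      # 0-5 V line with Gaussian noise; just the "narrower decision
                      # windows" argument in numbers.
                      import random

                      def error_rate(levels: int, noise_sd: float, trials: int = 100_000) -> float:
                          step = 5.0 / levels                  # width of each voltage bucket
                          errors = 0
                          for _ in range(trials):
                              digit = random.randrange(levels)
                              ideal = (digit + 0.5) * step     # transmit mid-bucket voltage
                              seen = ideal + random.gauss(0.0, noise_sd)
                              read = min(levels - 1, max(0, int(seen / step)))
                              errors += (read != digit)
                          return errors / trials

                      random.seed(1)
                      for levels in (2, 10):
                          print(levels, "levels:", error_rate(levels, noise_sd=0.3))
                      # Binary misreads almost never; decimal misreads over a third of the time.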



                  So your minimum historical change is No Electronics.






                  • This answer does not meet the question's "Transistors were invented in this alternate timeline, in the 1950s" constraint.
                    – RonJohn
                    Nov 26 at 14:29






                  • @RonJohn - I understand your point. I suppose I could have answered "Given the conditions you propose, there is no historical answer to your question." Maybe I'll change the wording to add that extra sentence.
                    – chasly from UK
                    Nov 26 at 14:38










                  • The electronics of the IBM1620 were entirely binary. The decimal capabilities depended on the way real-world data was encoded in binary, not on hardware that somehow used 10 different states to represent decimal numbers.
                    – alephzero
                    Nov 27 at 23:46










                  • @alephzero - If you read my answer carefully you'll see I said, "The key phrase there is variable word length decimal which is a real faff and actually still uses binary at the electronic level."
                    – chasly from UK
                    Nov 28 at 0:39

















                  edited Nov 26 at 14:40
                  answered Nov 26 at 13:34
                  chasly from UK




















                  up vote
                  4
                  down vote













                  As I understand it, some early cultures used base 12, and it's a lot more flexible than 10. They counted to 12 on the knuckles of one hand (three per finger, using the thumb as a pointer) and used the fingers of the other hand to tally the dozens, reaching 60 on two hands, which is the basis of our degrees.



                  10-finger-counters supposedly defeated the base 12ers but kept their time system and degree-based trigonometry.



                  If the base 12ers had won, a three-state computer might have made a LOT more sense (binary might have actually looked silly). In this case a byte would probably be 8 tri-state digits (let's call it 8/3), which would comfortably fit 2 base-12 digits (144 states out of 3⁸ = 6,561 available), instead of our 8/2 layout, which always had a bit of a mismatch.



                  We tried to cope with our mismatch by using BCD, throwing away 6 states from each nibble (1/2 byte) for a closer approximation of base 10. That gave us "pure" decimal math without all the weird binary oddities (like how, written in base 10, 1 byte holds 256 states, 2 bytes hold 65,536, etc.).



                  With 8/3, the base 12ers' encoding would be really clean. A byte read as two base-12 digits holds 144 states, which is a round 100 in base 12, and 2 bytes would hold a round 10000, etc.



                  So can you change the numeric base of your book? Shouldn't come up too often :) It would even be fun to number the pages in base 12... complete immersion.
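
                  A quick check of those round numbers in Python (my own sketch, not part of the answer): 12 × 12 = 144 is written 100 in base 12, and 8 trits hold two base-12 digits with room to spare, while BCD wastes 6 states per nibble.

                      # Sketch: the "round numbers" above, checked in base 12.
                      def to_base(n: int, radix: int) -> str:
                          """Render n in the given radix, digits 0-9 then A, B."""
                          digits = "0123456789AB"
                          out = ""
                          while True:
                              n, r = divmod(n, radix)
                              out = digits[r] + out
                              if n == 0:
                                  return out

                      print(to_base(12 * 12, 12))         # 100   (one 2-digit byte)
                      print(to_base((12 * 12) ** 2, 12))  # 10000 (two bytes)
                      print(3 ** 8, ">=", 12 * 12)        # 6561 >= 144: 8 trits suffice
                      # Compare BCD: 8 bits give 256 states, but two decimal digits
                      # use only 100 of them -- 6 wasted states per nibble.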






                  • Base 6 has almost all of the advantages of base 12, but requires much smaller addition and multiplication tables. And it can be built by pairing binary elements with ternary elements. Also, 6^9 ~ 10^7 (10,077,696).
                    – Jasper
                    Nov 27 at 20:42
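
                  Jasper's pairing idea is easy to make concrete (a hypothetical encoding, sketched in Python): one binary element plus one ternary element gives 2 × 3 = 6 states, exactly one base-6 digit.

                      # Sketch: one bit paired with one trit yields one base-6 digit
                      # (2 * 3 = 6 states). The particular encoding is hypothetical.
                      def pack(bit: int, trit: int) -> int:
                          """Map a (bit, trit) pair onto a base-6 digit 0..5."""
                          assert bit in (0, 1) and trit in (0, 1, 2)
                          return bit * 3 + trit

                      def unpack(digit: int) -> tuple[int, int]:
                          """Recover the (bit, trit) pair from a base-6 digit."""
                          assert 0 <= digit < 6
                          return divmod(digit, 3)

                      # The mapping is a bijection: every digit has exactly one pair.
                      assert sorted(pack(b, t) for b in (0, 1) for t in (0, 1, 2)) == list(range(6))
                      print(6 ** 9)  # 10,077,696 -- Jasper's 6^9 ~ 10^7 capacity note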















                  answered Nov 26 at 19:22
                  Bill K






















                  up vote
                  3
                  down vote













                  Decimal computers.



                  Modern computers are, indeed, binary. Binary is the classification of an electrical signal as occupying one of two states, conditional on the voltage. For the sake of simplicity, you could say that in a 5V system, any signal above 4V is a '1' and everything else is a '0'. Once a signal has been confined to two states, it's pretty easy to apply Boolean math, which was already well explored ahead of computers. Binary was an easy choice for computers because so much work had already been done in the area of Boolean algebra.



                  When we needed to increase the range of numbers, we added more signals. Two signals (two bits) could represent 4 distinct values, 3 bits 8 values, and so on. But what if, instead of adding more signals to expand our values, we simply divided the existing signals up more? In a 5V system, one signal could represent a digit from 0-9 if we divide up the voltage: 0-0.5 volts = 0, 0.5-1.0 volts = 1, 1.0-1.5 volts = 2, etc. Each signal would then carry five times as many distinct states as a binary signal (about 3.3 bits instead of 1). But why stop there? Why not split each signal into 100 distinct values?
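
                  As a concrete (and deliberately idealized) Python version of that scheme: the decoder just divides the measured voltage by the bucket width. Real hardware would need noise margins, which is exactly where the trouble starts, as the next paragraph explains.

                      # Idealized decoder for the 10-level scheme above: a 0-5 V signal
                      # divided into ten 0.5 V buckets, one per decimal digit. No noise
                      # margins are modelled here.
                      LEVELS, VMAX = 10, 5.0

                      def encode(digit: int) -> float:
                          """Transmit the mid-point voltage of the digit's bucket."""
                          assert 0 <= digit < LEVELS
                          return (digit + 0.5) * (VMAX / LEVELS)

                      def decode(volts: float) -> int:
                          """Classify a voltage into its bucket, clamped to the range."""
                          return min(LEVELS - 1, max(0, int(volts / (VMAX / LEVELS))))

                      assert all(decode(encode(d)) == d for d in range(LEVELS))
                      print([decode(v) for v in (0.1, 0.6, 2.4, 4.9)])  # [0, 1, 4, 9]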



                  Well, for the same reason we never went further than binary: environmental interference and a lack of precision components. You need to be able to precisely measure the voltages to determine the value, and if those voltages drift, your system becomes unreliable. All kinds of factors can affect electrical voltages: RF interference, temperature, humidity, metal density, etc. As components age, their precision tends to degrade.



                  Any number of things could have changed this. With a different medium, light for example, electrical interference is much less of a concern. This is exactly why fiber optics can carry so much more data than electrical connections.



                  The discovery of a room-temperature superconductor could also have allowed different computers to become standard. A superconductor doesn't lose energy to resistive heating. This means you could push more current through a system without fear of overheating, tolerating less precise components and requiring less (or no) cooling.



                  So, in short, binary computers dominate because of physical limitations related to electricity and the wealth of knowledge (Boolean algebra) that was already available when vacuum tubes, transistors, and semiconductors came about. Change any of those factors, and binary computers may never have been.






                  answered Nov 26 at 19:44
                  Robear






















                          up vote
                          3
                          down vote













                  In the late 1950s, analog computers were developed using a hydraulic technology called fluidics. Fluidic processing is still used in automatic transmissions, although newer designs are hybrid electronic/fluidic systems.






                  • Can you explain more about this? Describe fluidic processing, link to more info, and then explain why it might have surpassed binary digital computing?
                    – kingledion
                    Nov 27 at 15:20















                  answered Nov 26 at 23:22
                  Nik Pfirsig


















                          up vote
                          2
                          down vote













                          Hypercomputation



                  According to Wikipedia, hypercomputation is defined as follows:




                          Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.



                          The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.



                          Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.




                  What this means is that hypercomputation can do things ordinary computers cannot do. Not in terms of scope limitations, such as the ability to access things on a network, but in terms of what can and cannot fundamentally be solved as a mathematical problem.



                  Consider this: can a computer store the square root of 2 and operate on it? Well, maybe, because it could store the coefficients of the polynomial whose solution is that square root and then index the solutions to that polynomial. Alright, so we can represent the so-called algebraic numbers (at least I believe so). What about all real numbers? Euler's number e and pi are transcendental, so they cannot be represented this way in any meaningful sense using binary. We can approximate, but we cannot have perfect representations. We could make pi a special symbol, and e as well, and just increase the symbol set; still not good enough. That's the primary thing that comes to mind, at least for me: the ability to digitally compute any real number with perfect precision.
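
                  The square-root-of-2 trick can be sketched directly (an illustrative Python encoding, not a full algebraic-number system): store the minimal polynomial's integer coefficients plus a root index, and the number can be queried exactly. Pi and e, being transcendental, satisfy no such polynomial, so no finite record of this kind exists for them.

                      # Sketch: sqrt(2) represented exactly by its minimal polynomial
                      # x^2 - 2 plus a root index (it is the larger of the two real
                      # roots). Pi and e are transcendental: no integer-coefficient
                      # polynomial has them as a root, so this encoding cannot hold them.
                      from fractions import Fraction

                      class AlgebraicNumber:
                          def __init__(self, coeffs: list[int], root_index: int):
                              self.coeffs = coeffs          # c0 + c1*x + c2*x^2 + ...
                              self.root_index = root_index  # which real root, ascending

                          def eval_poly(self, x: Fraction) -> Fraction:
                              """Evaluate the defining polynomial exactly at rational x."""
                              total = Fraction(0)
                              for c in reversed(self.coeffs):  # Horner's rule
                                  total = total * x + c
                              return total

                      sqrt2 = AlgebraicNumber([-2, 0, 1], root_index=1)  # x^2 - 2
                      # Exact bracketing of the root: 7/5 undershoots, 3/2 overshoots.
                      print(sqrt2.eval_poly(Fraction(7, 5)))  # -1/25 (< 0)
                      print(sqrt2.eval_poly(Fraction(3, 2)))  # 1/4   (> 0)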



                  This would be a reason for such a society to never find binary computers useful. At some point we switched from analog to binary because of electrical and signalling constraints (I honestly do not know the details). We modeled the modern notion of a processor and other things loosely on the Turing machine, which ultimately became the formal way of discussing computability, itself a many-faceted convergence. There was the idea of something being human-computable, and then theoretically computable; the rough abstract definition used for many years ended up converging with the notion of the Turing machine. There was also another formalism (the general recursive functions, if I recall) that ended up defining exactly the same concept of "computable". All of these converging basically meant it was settled: that is what we as a society, or even as the human race, were able to come up with as a notion of what is and is not programmable.

                  However, that is the convergence of possibly over 3,000 years of mathematical development, beginning in concept perhaps as far back as Euclid, when he formalized the most basic notions of axioms and theorems. Math existed before then, but only as a tool; nobody had a formal notion of it, things were just obvious and known. If hypercomputation is possible for humans to perform (rather than being limited to machines), then all it would take is one genius in the entire history of math to crack it. I'd say that is a reasonable premise for an alternate history.






                          share|improve this answer

























                            up vote
                            2
                            down vote













                            Hypercomputation



                            According to Wikipedia Hypercomputation is defined to be the following:




up vote
2
down vote

                              Hypercomputation



According to Wikipedia, hypercomputation is defined as follows:




                              Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. For example, a machine that could solve the halting problem would be a hypercomputer; so too would one that can correctly evaluate every statement in Peano arithmetic.



                              The Church–Turing thesis states that any "effectively computable" function that can be computed by a mathematician with a pen and paper using a finite set of simple algorithms, can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not effectively computable in the Church–Turing sense.



                              Technically the output of a random Turing machine is uncomputable; however, most hypercomputing literature focuses instead on the computation of useful, rather than random, uncomputable functions.




What this means is that hypercomputation can do things ordinary computers cannot: not in terms of scope limitations, such as the ability to access things on a network, but in terms of which mathematical problems can fundamentally be solved at all.



Consider this: can a computer store the square root of 2 and operate on it? Perhaps, because it could store the coefficients of a polynomial whose root is that number and then index the roots of that polynomial. So we can, I believe, represent the so-called algebraic numbers. But what about all real numbers? Numbers such as e and pi are transcendental, so even the polynomial trick cannot capture them; we can approximate them, but we cannot have perfect representations in binary. We could treat pi and e as special symbols and enlarge the symbol set, but that still would not cover every real number. That is the primary thing that comes to mind: the ability to digitally compute any real number with perfect precision.
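As an illustration of that polynomial trick, here is a minimal sketch (my own construction, not part of the original answer) of how a program could store sqrt(2) exactly as a pair (polynomial coefficients, root index), falling back to lossy floating point only when someone asks to see the value:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AlgebraicNumber:
        coeffs: tuple      # integer coefficients: p(x) = c0 + c1*x + ... + cn*x^n
        root_index: int    # which real root of p, counted from the left

        def _p(self, x):
            return sum(c * x ** k for k, c in enumerate(self.coeffs))

        def approximate(self, lo=-100.0, hi=100.0, steps=20_000, iters=80):
            """Lossy float view of the exact number: scan for sign changes,
            then bisect the chosen bracket. The pair itself stays exact."""
            step = (hi - lo) / steps
            brackets, fprev = [], self._p(lo)
            for i in range(1, steps + 1):
                x = lo + i * step
                f = self._p(x)
                if fprev * f <= 0 and (fprev != 0 or f != 0):
                    brackets.append((x - step, x))
                fprev = f
            a, b = brackets[self.root_index]
            for _ in range(iters):
                m = (a + b) / 2
                if self._p(a) * self._p(m) <= 0:
                    b = m
                else:
                    a = m
            return (a + b) / 2

    # sqrt(2) is the second real root (index 1) of x^2 - 2:
    sqrt2 = AlgebraicNumber(coeffs=(-2, 0, 1), root_index=1)
    print(sqrt2.approximate())  # 1.41421356...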



This would be a reason for such a society never to find binary computers useful. At some point we switched from analog to binary because of electrical and signalling needs. We modelled the modern notion of a processor loosely on the Turing machine, which became the formal way of discussing computability through a many-sided convergence: the intuitive idea of what is "human computable" ended up matching the Turing machine, and independent formalisms (Church's lambda calculus and the general recursive functions) turned out to define exactly the same class of "computable" functions.

Once all of these converged, the matter was considered settled: that is what we as a society, or even as a species, were able to come up with as a notion of what is and is not programmable. But that convergence is the product of perhaps 3,000 years of mathematical development, going back in concept as far as Euclid, who formalized the most basic notions of axioms and theorems. Before that, mathematics existed only as a tool; nobody had a formal notion of it.

If hypercomputation is possible for humans to do, rather than being limited to machines, then all it would take is one genius in the entire history of mathematics to crack it. I'd say that is a reasonable premise for an alternate history.






                              answered Nov 28 at 3:02









                              The Great Duck























                                  up vote
                                  1
                                  down vote













Base-10 computing machines were used commercially to control the early telephone switching system. The telephone companies used them because they were solving a base-10 problem. As long as transistors remain larger and more expensive than mechanical relays, there's no reason for telephone switchboards to switch to binary.



But that's cheating the spirit of the question. Suppose cheap transistors are invented; how, then, can a civilization get out of binary computing? Binary logic is the best way to build an electronic deterministic computer with cheap transistors.



                                  Answer: Analog neural networks outperform manually-programmed computers.



Humans are bad at programming computers directly. Manually-programmed computers can perform only simple, unambiguous tasks. Statistical programming, also called "machine learning", can answer questions without clear mathematical answers, such as "is this a picture of a frog?". Hand-coding an algorithm to decide that question is well beyond the capabilities of human beings, and so are more complex tasks like "enforce security at this railroad station" and "take care of my grandmother in her old age".



Manually-programmed software outnumbers neural-network-based software right now, but that might plausibly be just a phase, since manually-programmed software is easier to create. In a few hundred years, neural-network-based software might outnumber manually-programmed software.



One of the most promising avenues of machine learning is neural networks, which use ideas copied from biological brains. If we invent a good general-purpose AI, it might well take the form of a neural network, especially if the AI is modelled on the human brain.



If you're designing a computer to execute traditional programs, binary is the best way to go. But if the goal of a microchip is to simulate a human brain, it may be inefficient to build a binary computer and then simulate the brain on top of it; it might make more sense to build the neural network directly into hardware. The human brain is an analog device, so a microchip modelled on it may be an analog device too.
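To make the contrast concrete, here is a toy sketch (my construction, not the answer's) of why neural networks map naturally onto analog hardware: the core operation is a weighted sum of continuous signals pushed through a smooth nonlinearity, quantities an analog circuit could carry as voltages rather than long binary words:

    import math

    def neuron(inputs, weights, bias):
        """One analog-style neuron: a weighted sum through a smooth
        nonlinearity. Every quantity here is a continuous real number."""
        s = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

    def xor_net(x1, x2):
        """Tiny two-layer network approximating XOR on continuous inputs."""
        h1 = neuron([x1, x2], [5.0, 5.0], -2.5)    # roughly: x1 OR x2
        h2 = neuron([x1, x2], [-5.0, -5.0], 7.5)   # roughly: NOT (x1 AND x2)
        return neuron([h1, h2], [5.0, 5.0], -7.5)  # roughly: h1 AND h2

    for a, b in [(0.1, 0.1), (0.1, 0.9), (0.9, 0.1), (0.9, 0.9)]:
        print(a, b, round(xor_net(a, b), 2))  # high only when exactly one input is high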



                                  If someone figured out how to build a powerful general-purpose AI as an analog neural network then chips optimized for neural networks may largely replace binary computers.






                                      answered Nov 27 at 12:20









                                      lsusr























                                          up vote
                                          1
                                          down vote













                                          One simple change would be to make solid-state electronics impossible. Either your planet doesn't have abundant silicon, or there is some chemical issue which makes it uneconomic to manufacture semiconductors.



                                          Instead, consider what would happen if Charles Babbage's mechanical computer designs (which were intrinsically decimal devices, just like the mechanical calculators which already existed in Babbage's day) were scaled down to nano-engineering size.



The earliest computers used vacuum-tube electronics, not semiconductors. The basic design of vacuum-tube memory circuits was already known by 1920, long before the first computers, but for large-scale computer memory, tubes would have been prohibitively large, power-hungry, and unreliable. The earliest computers therefore used various alternative storage systems, some of which were in effect mechanical rather than electrical, so the notion of totally mechanical computers does have some relation to actual history.






                                              answered Nov 28 at 0:04









                                              alephzero























                                                  up vote
                                                  1
                                                  down vote













                                                  Morse code rules.



[Image: telegraph] https://www.kaspersky.com/blog/telegraph-grandpa-of-internet/9034/



                                                  Just as modern keyboards retain the QWERTY of the first typewriters, in your world the trinary code of Morse becomes the language of computers. Computers developed to rapidly send and receive messages naturally use this language to send messages other than written language, and then to communicate between parts of themselves.



                                                  There are apparently technical reasons making binary more efficient. https://www.reddit.com/r/askscience/comments/hmy7w/if_morse_is_more_efficient_than_binary_why_dont/



I am fairly certain there are more efficient layouts than QWERTY as well, yet many decades after the need to keep keys spatially separated disappeared, QWERTY is still with us. So too Morse in your world: it was always the language of computers, and it endures as such.
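For concreteness, here is a tiny sketch (mine, not part of the answer) of Morse treated as a three-symbol code, with dot, dash, and gap as first-class symbols; this is exactly the level at which the comment thread below argues about whether Morse is "really" binary:

    # Morse over the ternary alphabet {'.', '-', ' '}: dot, dash, inter-letter gap.
    MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}

    def to_trits(text):
        """Encode text as dot/dash symbols with a gap symbol between letters."""
        return " ".join(MORSE[ch] for ch in text.upper())

    print(to_trits("SOS"))  # ... --- ...
    print(to_trits("EAT"))  # . .- -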






                                                  • 1




                                                    Interesting. You could merge this with one of the reasons for ternary mathematics to provide a more robust explanation for ternary computers.
                                                    – kingledion
                                                    Nov 28 at 2:16










                                                  • But Morse code is binary
                                                    – endolith
                                                    Nov 30 at 17:20










                                                  • @endolith - Morse has dot, dash, and space.
                                                    – Willk
                                                    Nov 30 at 21:56










                                                  • @Willk Which are made up of on and off.
                                                    – endolith
                                                    Dec 1 at 7:09










• @endolith - listen to some morse code. The length of the "on" is what distinguishes dot from dash. Spaces must be put in between. But don't take it from me. Read up: cs.stackexchange.com/questions/39920/…
                                                    – Willk
                                                    Dec 1 at 17:40















                                                  answered Nov 28 at 2:14









                                                  Willk

                                                  up vote
                                                  1
                                                  down vote













Oh, man. Although non-binary computers would be extremely inconvenient, I can easily imagine trinary, decimal or even analog computers becoming dominant, thanks to a terrible force all developers fear: entrenchment, and the need for legacy support. Time and again in computing history we have struggled with decisions made long ago that sheer inertia kept us from breaking free of for years. There's a lot even in modern processors that we never would have chosen if not for the need to support existing software and architecture decisions.



                                                  So for your scenario, I imagine that for some reason one type of non-binary computer got a head start. Maybe for many years, computers didn't improve all that much due to some calamity. But software was still written for these weak computers, extremely useful and good software, tons of it. By the time things got going again, it was just much more profitable to focus on making trinary (or whatever) better, rather than trying to redo all the work Ninetel put into their 27-trit processor.



Sure, there are some weirdos claiming that binary is so much more sensible that it's worth making a processor that's BISC (binary instruction set circuit) at the bottom with a trinary emulation layer on top. But after the bankruptcy of Transbita, venture capital has mostly lost interest in such projects.
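As a hint of what a "trit" buys you (my illustration; the fictional Ninetel hardware above is obviously not specified), real ternary machines such as the Soviet Setun used balanced ternary, with digits {-1, 0, +1}, where negating a number is just flipping every digit:

    def to_balanced_ternary(n):
        """Balanced-ternary digits of an integer, least significant first."""
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:           # represent 2 as -1 plus a carry into the next trit
                r, n = -1, n + 3
            digits.append(r)
            n //= 3
        return digits or [0]

    def from_balanced_ternary(digits):
        return sum(d * 3 ** i for i, d in enumerate(digits))

    d = to_balanced_ternary(11)                    # [-1, 1, 1]: -1 + 3 + 9 = 11
    print(d, from_balanced_ternary(d))             # [-1, 1, 1] 11
    print(from_balanced_ternary([-t for t in d]))  # negation by digit flip: -11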






                                                      answered Nov 29 at 14:47









                                                      Harald Korneliussen























                                                          up vote
                                                          1
                                                          down vote













Suppose the creatures involved have three fingers and use a ternary numeral system in everyday life. The technical advantages of binary over ternary aren't as great as the advantages of binary over decimal (by radix economy, ternary is in fact marginally better than binary), so they never bother to adopt a system other than the one they know innately.
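The radix-economy claim is easy to check numerically (my sketch, not part of the answer): the asymptotic cost of a positional representation scales with b / ln b, which is minimised near e ≈ 2.718, so base 3 edges out base 2 while base 10 is far worse:

    import math

    # Asymptotic radix economy: digits needed times symbols per digit ~ b / ln(b).
    for b in (2, 3, 10):
        print(f"base {b:2d}: {b / math.log(b):.3f}")
    # base  2: 2.885
    # base  3: 2.731
    # base 10: 4.343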






                                                          • Creatures with three genders! Three parents.
                                                            – Amarth
                                                            2 days ago















                                                          edited Nov 30 at 17:08

























                                                          answered Nov 29 at 19:01









                                                          endolith













                                                          up vote
                                                          1
                                                          down vote













They made quantum computing work much sooner than we have.

Why have only two states, when a qubit offers a continuum of them?

They probably had binary computers for a short time, then cracked quantum.




                                                          What is the minimal historical change that would make non-binary computers the standard in a world equivalent to our modern world?




Someone cracked a cheap, room-temperature way to make qubits.



                                                          (ref: https://medium.com/@jackkrupansky/the-greatest-challenges-for-quantum-computing-are-hardware-and-algorithms-c61061fa1210)
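A toy picture of the "continuum of states" point (my sketch, with the caveat that measuring a qubit still yields only one of two outcomes): a qubit's state is any unit vector a|0> + b|1>, so there are infinitely many valid (a, b) pairs rather than a two-value switch:

    import math, random

    def measure(a, b):
        """Collapse a qubit a|0> + b|1> to a classical bit with P(0) = |a|^2."""
        return 0 if random.random() < abs(a) ** 2 else 1

    # One of infinitely many valid states: the equal superposition.
    a = b = 1 / math.sqrt(2)
    print([measure(a, b) for _ in range(16)])  # roughly half 0s, half 1s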






edited Dec 1 at 1:46
answered Nov 30 at 3:42
GreenAsJade 26717






















                                                                  up vote
                                                                  1
                                                                  down vote













                                                                  Politically enforced decimal base-10, expressed as binary-coded decimal



The most reasonable alternative to the binary computer (which is the most efficient) would be a decimal, base-10 one.



Suppose a government mandated that computers be decimal, since that is the system most natural to humans. Perhaps it feared early on that computers would be restricted to an "elite" who understood binary and hex numbers, and wanted the technology to be accessible to everyone.



The same argument explains why the computer mouse was invented and became a success: not because it was faster to use, and certainly not because it was ergonomic, but because it was easier to use. Ease of use recurs throughout computing history: Windows won and became the dominant OS, and so on.





A decimal computer would still be possible without changing the way computers work all that much - it would use binary-coded decimal (BCD). Processors would use different opcodes and data would be stored differently in memory, but otherwise transistors would still be either on or off, and Boolean logic would remain true or false.



                                                                  Data would take up more space and calculations would be slower, but potentially it would be easier for humans to interpret raw data that way.



Take for example the decimal number 99. If you just know that binary for 9 is 1001, then you can write 99 in BCD as 1001 1001. This is how those nerdy binary watches work - they aren't actually using real base-2 binary, but BCD, which is easier to read. Otherwise even the nerd would struggle to read the time.



To express the number 99 in raw base-2 binary, it would be 110 0011 - not nearly as readable for humans, though we saved one bit of storage. To read it, a human has to expand it in decimal: 64 + 32 + 0 + 0 + 0 + 2 + 1 = 99.
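For readers who want to verify the example, a quick sketch of the two encodings in plain Python (my illustration, nothing specific to this hypothetical hardware):

    def to_bcd(n):
        # Encode each decimal digit separately, four bits per digit.
        return " ".join(format(int(d), "04b") for d in str(n))

    n = 99
    print(to_bcd(n))       # 1001 1001 - one nibble per decimal digit
    print(format(n, "b"))  # 1100011   - raw base 2, one bit shorter here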















answered 2 days ago
Amarth 1213






















                                                                          up vote
                                                                          0
                                                                          down vote













Binary computers are simply the most efficient ones, claims to the contrary notwithstanding (such claims rest on pretty exotic assumptions; the linked claim that "the future lies in analog computers" is even hilariously wrong, though I can see where Ulmann is coming from).

Binary computers are the most cost- and space-efficient option if the technology is based on transistors: a ternary computer would require more transistors than a binary one to store or process the same amount of information. The reason is that electrically, the distinction between "zero volts" and "five volts" is really "anything below 2.0 volts" versus "anything above 3.0 volts", which is much easier to control than ternary voltage levels such as "below 1.0 volts, between 2.0 and 3.0 volts, or between 4.0 and 5.0 volts". Yes, you need the gaps between the voltage bands, because you have to deal with noise and imprecision (manufacturing spread and electrical variation); and yes, the gaps are pretty large, because the larger the gap, the more variance in the integrated circuit is inconsequential and the better your yield (which is THE most important parameter of submicron manufacturing).
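A toy sketch of that argument in Python - the band limits below are illustrative only, not any real logic family's thresholds:

    def classify(voltage, bands):
        # Return the symbol whose (low, high) band contains the voltage,
        # or None if it falls into a forbidden gap between bands.
        for symbol, (low, high) in bands.items():
            if low <= voltage <= high:
                return symbol
        return None

    binary_bands = {0: (0.0, 2.0), 1: (3.0, 5.0)}                  # one 1 V gap
    ternary_bands = {0: (0.0, 1.0), 1: (2.0, 3.0), 2: (4.0, 5.0)}  # two 1 V gaps

    for v in (0.5, 2.5, 4.2):
        print(v, classify(v, binary_bands), classify(v, ternary_bands))

With the same 5 V supply, each ternary band is half as wide as a binary band, so the same amount of noise is far more likely to push a signal into a forbidden gap.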



                                                                          How to get around this?

Either change the driving parameters. In an economy where efficiency isn't even remotely relevant, you can choose convenience. Such an economy will collapse as soon as it touches a more efficient one, so this requires an isolated economy (Soviet-style or even North-Korea-style, though it takes some extra creativity to design a world where people don't vote with their feet against a massively less efficient economy - historically this was enforced by oppressive regimes, but it might be possible that people accept a lower level of income and goods for other reasons).

Or posit basic components that are better at being ternary than transistors are at being binary. Somebody with a better background in microelectronics than me might be able to propose something that sounds credible, or maybe something that isn't based on classic electrical currents: quantum devices, maybe, or something photonic.



                                                                          Why is this not done much in literature?

Because, ultimately, it does not matter much whether you have bits or trits. Either way, you bunch together as many of them as you need to represent N decimal digits. Software engineers don't care much, unless they are the ones who write the basic algorithms for addition/subtraction/etc., or the ones who write the algorithms that need to be fast (i.e. those that deal with large amounts of data, whether it's a huge list of addresses or the pixels on the screen).

Some incidental numbers would likely change. Bunching 8 bits into a byte is convenient because 8 is a power of 2; that's why 256 (2^8) tends to pop up in the number of screen colors and various other places. With ternary computers, you'd likely use trytes of nine trits, giving 19683 values each. HDR would come much later or not at all, because RGB channels would already have far more color gradations - so there would be some non-obvious differences.
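The arithmetic behind those numbers, for anyone who wants to check (the bits-per-trit line is my addition, not part of the answer):

    import math

    print(2**8)          # 256 values in an 8-bit byte
    print(3**9)          # 19683 values in a 9-trit tryte
    print(math.log2(3))  # ~1.585 bits of information carried per trit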



You can simply make it a background fact and never highlight it, just to avoid the explanation.

Which raises the counter-question: what's the plot device you need ternary for?






answered Nov 27 at 6:49
toolforger 471

















                                                                          • 1




                                                                            Your answer boils down to: "Your world isn't interesting", which isn't very helpful
                                                                            – pipe
                                                                            Nov 27 at 10:03






                                                                          • 2




                                                                            "Bunching 8 bits into a byte is helpful because 8 is a power of 2" - which doesn't explain why many computers (even up to the 1980s) did not have 8 bits in a byte. Word lengths of 12, 14 and 18 bits were used, and later bigger numbers including 48 and 60 bits (divided into ten 6-bit "characters").
                                                                            – alephzero
                                                                            Nov 28 at 0:08












• @alephzero The drive towards 8-bit bytes isn't a very strong one, admittedly. But eventually it did converge towards 8-bit bytes. Maybe the actual drive was that it was barely enough to hold an ASCII character, and that drive played out in times when you wouldn't want to "waste" an extra byte, and the idea of supporting multiple character sets was a non-issue because the Internet didn't exist yet. Still, I'm pretty sure some bit fiddling critically depends on the bit count being a power of two... though I'd have trouble finding such an algorithm, admittedly.
                                                                            – toolforger
                                                                            Nov 28 at 7:13

























                                                                          up vote
                                                                          0
                                                                          down vote













Cryptocurrency scammers having convinced sufficiently many big corporations and governments to become partners in their pyramid scheme that economies of scale make their inefficient and ridiculous ternary-logic hardware cheaper than properly designed computers.
















answered Nov 28 at 20:28
R.. 38637






















                                                                                  up vote
                                                                                  -1
                                                                                  down vote













The strength of binary is that it's fundamentally a yes/no logic system; the weakness of binary is that very same thing: you need multiple layers of logic to create "yes, and" statements with binary logic. The smallest change you would need to make to move away from binary (in terms of having the rest of the world be the same but computing be different) would be to have the people who pioneered the science of computers, particularly Turing (thanks @Renan), aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc., various combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates, and a more complex programming language; consequently computers will be more expensive, more delicate, and harder to program.



                                                                                  A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.
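As one purely hypothetical example of what richer basic logic outcomes could look like, here is the standard Kleene-style extension of AND/OR to three values (min/max over no/maybe/yes) - the answer doesn't commit to this particular scheme:

    # Three truth values: 0 = no, 1 = maybe, 2 = yes.
    def tern_and(a, b):
        # A natural three-valued AND: the result is the weaker input.
        return min(a, b)

    def tern_or(a, b):
        # A natural three-valued OR: the result is the stronger input.
        return max(a, b)

    for a in (0, 1, 2):
        for b in (0, 1, 2):
            print(a, b, tern_and(a, b), tern_or(a, b))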

























                                                                                  • 1




You are looking for Alan Turing. He is the one who introduced binarism into computing when describing the Turing machine.
                                                                                    – Renan
                                                                                    Nov 26 at 14:34










                                                                                  • @Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
                                                                                    – Ash
                                                                                    Nov 26 at 14:36










                                                                                  • Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
                                                                                    – chasly from UK
                                                                                    Nov 26 at 14:44












@chaslyfromUK Yes, if transistors are used, binary is mechanically inherent; but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system, transistors can't be used, and different circuitry will be required.
                                                                                    – Ash
                                                                                    Nov 26 at 14:53












                                                                                  • @Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
                                                                                    – AlexP
                                                                                    Nov 27 at 11:06















                                                                                  up vote
                                                                                  -1
                                                                                  down vote













                                                                                  The strength of binary is that it's fundamentally a yes/no logic system, the weakness of binary is that it is fundamentally a yes/no logic system, you need multiple layers of logic to create "yes and" statements with binary logic. The smallest change you would need to make to change away from binary (in terms of having the rest of the world being the same but computing being different) would be to have the people who pioneered the science of computers, particularly Turing (thanks @Renan) aim for, and demand, more complex arrays of basic logic outcomes (a, b, c, etc... vary combinations, all of the above, none of the above). Complex outcome options require more complex inputs, more complex logic gates and a more complex programming language: consequently computers will be more expensive, more delicate, and harder to program.



                                                                                  A few people might mess around with binary for really basic machines, like pocket calculators, but true computers will be more complex machines.






                                                                                  share|improve this answer



















                                                                                  • 1




                                                                                    You are looking for Alan Turing. He is the one who introduced binarism into computing, when describing the Turing machine.
                                                                                    – Renan
                                                                                    Nov 26 at 14:34










                                                                                  • @Renan Thanks that's what I thought but I couldn't remember if he got that from someone earlier or not.
                                                                                    – Ash
                                                                                    Nov 26 at 14:36










                                                                                  • Playing around with complex arrays still needs each element of the array to have a defined state. If transistors are used then the state would almost certainly still be represented in binary at some level. The equivalence of different types of universal computer has been proven.
                                                                                    – chasly from UK
                                                                                    Nov 26 at 14:44












                                                                                  • @chaslyfromUK Yes if transistors are used binary is mechanical inherent, but if transistor logic is insufficient to satisfy the fundamental philosophical and mechanical goals of the people building the system they can't be used, different circuitry will be required.
                                                                                    – Ash
                                                                                    Nov 26 at 14:53












                                                                                  • @Renan: You have a strange misconception about Turing machines. A Turing machine is defined by the alphabet (set of symbols) on the tape, the set of internal states, and the rules for state transition. It has nothing to do with numbers, binary or otherwise.
                                                                                    – AlexP
                                                                                    Nov 27 at 11:06
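A small editorial illustration of the point above (a sketch under assumed names, not from the comment itself): a Turing machine is parameterized by its tape alphabet, so a base-3 alphabet works exactly as well as a binary one. This toy machine increments a base-3 number written on the tape.

# Editorial sketch: a Turing machine over the ternary alphabet {0, 1, 2}
# plus the blank symbol "_". Transitions map (state, symbol) to
# (new_symbol, head_move, new_state). All names are illustrative.

TRANS = {
    ("inc", "0"): ("1",  0, "halt"),  # 0 -> 1, done
    ("inc", "1"): ("2",  0, "halt"),  # 1 -> 2, done
    ("inc", "2"): ("0", -1, "inc"),   # 2 -> 0, carry one place left
    ("inc", "_"): ("1",  0, "halt"),  # ran off the left end: new digit
}

def run(tape, head, state="inc"):
    tape = list(tape)
    while state != "halt":
        sym = tape[head] if head >= 0 else "_"
        new_sym, move, state = TRANS[(state, sym)]
        if head < 0:                  # grow the tape to the left
            tape.insert(0, new_sym)
            head = 0
        else:
            tape[head] = new_sym
        head += move
    return "".join(tape)

print(run("122", head=2))  # 122 (base 3, i.e. 17) + 1 -> 200 (i.e. 18)

Nothing in the definition privileges two symbols; binary won out for engineering reasons, not mathematical ones.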













                                                                                  edited Nov 26 at 14:35

























                                                                                  answered Nov 26 at 14:29









                                                                                  Ash

26.1k (4 gold, 65 silver, 144 bronze badges)







