What is a kilobyte? To some it may be 1000 bytes, and to others it may be 1024 bytes. Why the difference, and which one is correct? The computer world seems to be confused as to which convention to use. On one side there is the original JEDEC standard, which specifies that there are 1024 bytes in a kilobyte, 1024 kilobytes in a megabyte, and so on. On the other there is the IEC standard, which states that there are 1000 bytes in a kilobyte, 1000 kilobytes in a megabyte, and so on. What is the difference between these two standards, and what are their practical uses?
Personally I follow the JEDEC standard because, as a programmer, it makes much more sense to me. If we look at numbering systems, we see a pattern: each place value is a power of the base. For example, the first few place values of binary are 1, 2, 4, 8, 16, 32, and so on. Since the smallest unit on typical computers (the bit) is also base 2, we can extend the pattern and see that 1024 (2^10) is a place value. That means using 1024 lines up nicely when working with bits. The IEC standard, on the other hand, uses 1000, which is not a binary place value and leads to inconvenient alignment issues.
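To make the alignment argument concrete, here is a minimal sketch in Python (constant names are my own, purely illustrative). Each JEDEC unit is an exact power of 2, so it can be built with a plain bit shift, while the 1000-based units cannot:

```python
# JEDEC units: each is a binary place value, i.e. 1 shifted left
# by a multiple of 10 bits.
KILOBYTE = 1 << 10   # 1024 bytes == 2**10
MEGABYTE = 1 << 20   # 1024 kilobytes == 2**20
GIGABYTE = 1 << 30   # 1024 megabytes == 2**30

# 1000 is not a power of 2, so there is no whole-number shift
# that produces it: 2**9 == 512 undershoots, 2**10 == 1024 overshoots.
print(KILOBYTE, MEGABYTE == KILOBYTE * 1024)  # → 1024 True
```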
Well then, why do we have the IEC standard at all? I personally believe it's because of marketing. When you buy a new hard drive labeled as, say, 1TB, you expect to get the whole 1TB, but on some operating systems like Windows you might be confused to find that you only get about 931.323GB. This is because the labeling on hard drives uses the IEC standard while Windows uses the JEDEC standard. For marketing reasons the IEC standard is used on hard drive labels, because 1000 definitely looks better than 931.323, and uninformed buyers just go with the flow. The IEC standard also accounts for programmers by simply renaming kilo to kibi, mega to mebi, and so on, although I'd argue that the renaming is pointless. I see no real reason to make such a conversion, and I don't see why we have to cater to consumers who have no idea how the technology works. File systems and hard drive allocation tables, for instance, work in powers of 2 under the hood. You might find that the sector size of your hard drive is 512 bytes, and even the Advanced Format sector is 4096 bytes; because sector sizes are powers of 2, you will never have a sector that is 1000 bytes or 4000 bytes in length, which means the IEC units (kilo, mega, giga, etc.) misalign once you reach the kilobyte. Computers simply aren't base 10, and there simply aren't 10 bits to a byte. Maybe if there were 10 bits to a byte we could officially and nicely move over to IEC (and get rid of those weird kibi, mebi naming schemes).
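The 1TB-versus-931.323GB discrepancy above is just the two unit definitions applied to the same byte count. A quick sketch in Python of that arithmetic:

```python
# A drive labeled "1 TB" under the 1000-based convention holds
# 10**12 bytes. Reporting that same byte count in 1024-based
# gigabytes (2**30 bytes each), as Windows does, gives the
# smaller-looking figure.
labeled_bytes = 10 ** 12          # what the box says: 1 TB
jedec_gb = labeled_bytes / (1 << 30)
print(round(jedec_gb, 3))          # → 931.323
```

No bytes are missing; the same capacity is simply divided by 2^30 instead of 10^9.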
Which is completely possible. The size of the byte as we know it now is 8 bits, but bytes have been 5, 6, and 7 bits in the past, and bytes have even gone higher than 8 bits, with some systems using 20 bits to a byte. These days, though, there are preferences for the number of bits in a byte, such as that it should be a multiple of 2 (essentially, an even number of bits), and 10 bits satisfies that. So until I personally see 10-bit bytes, I don't see any reason to use a standard other than the JEDEC standard. Both are correct, but only one makes practical sense to me.