Bitcoin Proof-of-work difficulty increasing (2009.12.30--2010.8.26)
2010 Feb 5
- Author: Satoshi Nakamoto
- Email: satoshinakamotonetwork@proton.me
- Site: https://satoshinakamoto.network
Satoshi Nakamoto
Proof-of-work difficulty increasing
February 05, 2010, 07:19:12 PM
We had our first automatic adjustment of the proof-of-work difficulty
on 30 Dec 2009.
The minimum difficulty is 32 zero bits, so even if only one person
was running a node, the difficulty doesn't get any easier than that.
For most of last year, we were hovering below the minimum. On 30 Dec
we broke above it and the algorithm adjusted to more difficulty. It's
been getting more difficult at each adjustment since then.
The adjustment on 04 Feb took it up from 1.34 times last year's
difficulty to 1.82 times more difficult than last year. That means you
generate only 55% as many coins for the same amount of work.
The difficulty adjusts proportionally to the total effort across the
network. If the number of nodes doubles, the difficulty will also
double, returning the total generated to the target rate.
For those technically inclined, the proof-of-work difficulty can be
seen by searching on "target:" in debug.log. It's a 256-bit unsigned
hex number, which the SHA-256 value has to be less than to successfully
generate a block. It gets adjusted every 2016 blocks, typically two
weeks. That's when it prints "GetNextWorkRequired RETARGET" in
debug.log.
Date       | Target
minimum    | 00000000ffff0000000000000000000000000000000000000000000000000000
30/12/2009 | 00000000d86a0000000000000000000000000000000000000000000000000000
11/01/2010 | 00000000c4280000000000000000000000000000000000000000000000000000
25/01/2010 | 00000000be710000000000000000000000000000000000000000000000000000
04/02/2010 | 000000008cc30000000000000000000000000000000000000000000000000000
14/02/2010 | 0000000065465700000000000000000000000000000000000000000000000000
24/02/2010 | 0000000043b3e500000000000000000000000000000000000000000000000000
08/03/2010 | 00000000387f6f00000000000000000000000000000000000000000000000000
21/03/2010 | 0000000038137500000000000000000000000000000000000000000000000000
01/04/2010 | 000000002a111500000000000000000000000000000000000000000000000000
12/04/2010 | 0000000020bca700000000000000000000000000000000000000000000000000
21/04/2010 | 0000000016546f00000000000000000000000000000000000000000000000000
04/05/2010 | 0000000013ec5300000000000000000000000000000000000000000000000000
19/05/2010 | 00000000159c2400000000000000000000000000000000000000000000000000
29/05/2010 | 000000000f675c00000000000000000000000000000000000000000000000000
11/06/2010 | 000000000eba6400000000000000000000000000000000000000000000000000
24/06/2010 | 000000000d314200000000000000000000000000000000000000000000000000
06/07/2010 | 000000000ae49300000000000000000000000000000000000000000000000000
13/07/2010 | 0000000005a3f400000000000000000000000000000000000000000000000000
16/07/2010 | 000000000168fd00000000000000000000000000000000000000000000000000
27/07/2010 | 00000000010c5a00000000000000000000000000000000000000000000000000
05/08/2010 | 0000000000ba1800000000000000000000000000000000000000000000000000
15/08/2010 | 0000000000800e00000000000000000000000000000000000000000000000000
26/08/2010 | 0000000000692000000000000000000000000000000000000000000000000000
Date       | Difficulty | Change
2009       | 1.00       |
30/12/2009 | 1.18       | +18%
11/01/2010 | 1.31       | +11%
25/01/2010 | 1.34       | +2%
04/02/2010 | 1.82       | +36%
14/02/2010 | 2.53       | +39%
24/02/2010 | 3.78       | +49%
08/03/2010 | 4.53       | +20%
21/03/2010 | 4.57       | +1%
01/04/2010 | 6.09       | +33%
12/04/2010 | 7.82       | +28%
21/04/2010 | 11.46      | +47%
04/05/2010 | 12.85      | +12%
19/05/2010 | 11.85      | -8%
29/05/2010 | 16.62      | +40%
11/06/2010 | 17.38      | +5%
24/06/2010 | 19.41      | +12%
06/07/2010 | 23.50      | +21%
13/07/2010 | 45.38      | +93%
16/07/2010 | 181.54     | +300%
27/07/2010 | 244.21     | +35%
05/08/2010 | 352.17     | +44%
15/08/2010 | 511.77     | +45%
26/08/2010 | 623.39     | +22%
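The difficulty multipliers above follow directly from the targets: difficulty is the ratio of the minimum target to the current target. A minimal Python sketch (targets taken from the table above):

```python
# Difficulty relative to the minimum (32 zero bits) is the ratio
# minimum_target / current_target.

MIN_TARGET = int("00000000ffff" + "0" * 52, 16)  # minimum-difficulty target

def difficulty(target_hex: str) -> float:
    """Difficulty multiplier implied by a 256-bit target (hex string)."""
    return MIN_TARGET / int(target_hex, 16)

# Target from the 04/02/2010 retarget above:
feb04 = "000000008cc3" + "0" * 52
print(round(difficulty(feb04), 2))  # 1.82, matching the table
```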
Satoshi Nakamoto
February 15, 2010, 06:28:38 AM
14/02/2010 | 0000000065465700000000000000000000000000000000000000000000000000

Date       | Difficulty | Change
2009       | 1.00       |
30/12/2009 | 1.18       | +18%
11/01/2010 | 1.31       | +11%
25/01/2010 | 1.34       | +2%
04/02/2010 | 1.82       | +36%
14/02/2010 | 2.53       | +39%
Another big jump in difficulty yesterday, from 1.82 times to 2.53
times, a 39% increase since 10 days ago. It was 10 days apart, not 14,
because more nodes joined and generated the 2016 blocks in less
time.
Suggester
February 16, 2010, 02:15:49 AM
[Edit: I later found that I was generating quite a bit more than
that; I just didn't realize it because of the "matures in xx more blocks"
concept. I still think it will be a major headache when the difficulty
significantly increases, though. I apologize for my silliness.]
Satoshi, I figured it will take my modern core 2 duo about 20 hours
of nonstop work to create ฿50.00! With older PCs it will take forever.
People like to feel that they "own" something as soon as possible.
Is there a way to make the generation more divisible? Say, instead of
making ฿50 every 20 hours, make ฿5 every 2 hours?
I don't know if that means reducing the block size or reducing the
120-block threshold to, say, 12 blocks, or what, but because the
difficulty is increasing I can imagine that a year from now the
situation will be even worse (3+ weeks until you see the first spendable
coins!), and we had better find a solution for this ASAP.
Sabunir
February 16, 2010, 05:18:30 AM
I would like to comment that as of late, it seems almost as if I am
generating nearly no Bitcoins. Indeed, my rate of acquisition seems to
be more than ten times slower. If I cannot stay online for about
fourteen consecutive hours (very hard to do on a satellite connection!),
I actually get nothing at all.
How this exactly relates to the difficulty adjustments is beyond my
knowledge; I offer this feedback as a kind of "field report".
theymos
February 16, 2010, 06:01:51 AM
I generated 5 blocks today on my Pentium processor. Two of them were
within 3 minutes of each other.
I have noticed some slowdown since the adjustment, but I still
generate a lot of coins. My computer is off while I'm sleeping, and
BitCoin bootstraps quickly when I turn it back on. Do you
guys-who-are-having-trouble have the BitCoin port open?
Sabunir
February 16, 2010, 08:51:51 AM
My port is open, both in my software and hardware firewall. My router
is handling it appropriately. Perhaps it has to do with my connection's
very high latency (2000ms or more on average) and/or my high packet loss
(sometimes up to 10% loss)?
Satoshi Nakamoto
February 16, 2010, 05:36:40 PM
Quote from: Suggester on February 16, 2010, 02:15:49 AM
Satoshi, I figured it will take my modern core 2 duo about 20 hours
of nonstop work to create ฿50.00! With older PCs it will take forever.
People like to feel that they "own" something as soon as possible.
Is there a way to make the generation more divisible? Say, instead of
making ฿50 every 20 hours, make ฿5 every 2 hours?
I thought about that but there wasn't a practical way to do smaller
increments. The frequency of block generation is balanced between
confirming transactions as fast as possible and the latency of the
network.
The algorithm aims for an average of 6 blocks per hour. If it was ฿5
and 60 per hour, there would be 10 times as many blocks and the
initial block download would take 10 times as long. It wouldn't work
anyway because that would be only 1 minute average between blocks, too
close to the broadcast latency when the network gets larger.
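The tradeoff Satoshi describes is back-of-the-envelope arithmetic (the 6-blocks-per-hour target and ฿50 reward are from the thread; the rest is just multiplication):

```python
# Same issuance rate, different block intervals.
target_blocks_per_hour = 6   # ~10-minute average interval
reward_per_block = 50        # coins per block at the time

coins_per_hour = target_blocks_per_hour * reward_per_block
print(coins_per_hour)        # 300 coins/hour

# A 1-minute interval (60 blocks/hour, 5 coins each) keeps the same
# issuance but multiplies the block count, and so the initial block
# download, by 10:
print(60 * 5 == coins_per_hour)         # True
print(60 // target_blocks_per_hour)     # 10
```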
Suggester
February 17, 2010, 01:28:27 AM
Quote from: Sabunir on February 16, 2010, 05:18:30 AM
If I cannot stay online for about fourteen consecutive hours (very
hard to do on a satellite connection!), I actually get nothing at
all.
Can Satoshi confirm whether the computations your machine has made
carry on if the session is interrupted, or do you need to start all
over if you disconnect before generating at least one block? If they
carry on, maybe a little meter indicating the % left until your block
completes would be a nice addition, so people would have some hope
(actually, it would be a nice addition anyway, whether the computations
carry on after disconnection or not!)
Quote from: theymos on February 16, 2010, 06:01:51 AM
I generated 5 blocks today on my Pentium processor. Two of them were
within 3 minutes of each other.
OK, I just realized that I didn't understand how Bitcoin worked to
begin with. The blocks get generated anyway, whether you're generating
coins or not. The average rate of creation confirmed what I observed
before (120/20 hrs, or 6/hr). This has got absolutely nothing to do with
your CPU power; it's constant for all practical purposes. The CPU power
determines the "transactions" that get created and "matures in xx
blocks". My head just got a bit bigger now.
This also means, theymos, that there was probably a coincidence or
error for your 3-minute interval observation!
Satoshi Nakamoto
February 17, 2010, 05:58:03 PM
Quote from: Sabunir on February 16, 2010, 08:51:51 AM
Perhaps it has to do with my connection's very high latency (2000ms
or more on average)
2 seconds of latency in both directions should reduce your generation
success by less than 1%.
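The sub-1% figure follows from the 10-minute average block interval: work is only wasted during the window in which a new block exists but hasn't reached you yet. A rough sketch of the bound:

```python
# Rough upper bound on wasted work from propagation latency:
# you mine on a stale block only during the round-trip window.
latency_s = 2 * 2           # 2 seconds each direction
avg_block_interval_s = 600  # ~10-minute average between blocks

wasted_fraction = latency_s / avg_block_interval_s
print(wasted_fraction)      # ~0.0067, i.e. well under 1%
```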
Quote from: Sabunir on February 16, 2010, 08:51:51 AM
and/or my high packet loss (sometimes up to 10% loss)?
Probably OK, but I'm not sure. The protocol is designed to resync to
the next message, and messages get re-requested from all the other nodes
you're connected to until received. If you miss a block, it'll also keep
requesting it every time another block comes in and it sees there's a
gap. Before the original release I did a test dropping 1 out of 4 random
messages under heavy load until I could run it overnight without any
nodes getting stuck.
Sabunir
February 21, 2010, 04:58:44 PM
How do you adjust this difficulty, anyway? (Administrating a
decentralized system?) And what would prevent an attacker from setting
the difficulty very low or very high to interfere with the system?
NewLibertyStandard
February 21, 2010, 06:52:43 PM
Quote from: Sabunir on February 21, 2010, 04:58:44 PM
How do you adjust this difficulty, anyway? (Administrating a
decentralized system?) And what would prevent an attacker from setting
the difficulty very low or very high to interfere with the system?
My understanding is that every Bitcoin client has the same algorithm
(formula) built into it to automatically adjust the difficulty every so
many blocks. Not only that, but I think that Bitcoin will not accept
blocks generated at a different difficulty, so if a modified Bitcoin
client tried to send out more easily generated blocks, all the authentic
clients would reject the fake blocks.
Satoshi Nakamoto
February 24, 2010, 10:42:24 PM
The automatic adjustment happened earlier today.
24/02/2010 |
0000000043b3e500000000000000000000000000000000000000000000000000 |
I updated the first post.
Suggester
February 25, 2010, 04:34:59 AM
Quote from: NewLibertyStandard on February 21, 2010, 06:52:43 PM
Quote from: Sabunir on February 21, 2010, 04:58:44 PM
How do you adjust this difficulty, anyway? (Administrating a
decentralized system?) And what would prevent an attacker from setting
the difficulty very low or very high to interfere with the system?
My understanding is that every Bitcoin client has the same algorithm
(formula) built into it to automatically adjust the difficulty every so
many blocks.
Then how is it dependent on how many CPUs are connected to the whole
network?
Quote from: NewLibertyStandard on February 21, 2010, 06:52:43 PM
Not only that, but I think that Bitcoin will not accept blocks
generated at a different difficulty, so if a modified Bitcoin client
tried to send out more easily generated blocks, all the authentic
clients would reject the fake blocks.
We need Satoshi to confirm that because clients accept blocks
generated at easier difficulties all the time whenever the PoW's
difficulty increases.
Satoshi Nakamoto
February 25, 2010, 11:06:29 PM
The formula is based on the time it takes to generate 2016 blocks.
The difficulty is multiplied by 14/(actual days taken). For instance,
this time it took 9.4 days, so the calculation was 14/9.4 = 1.49.
Previous difficulty 2.53 * 1.49 = 3.78, a 49% increase.
I don't know what you're talking about accepting easier
difficulties.
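Satoshi's retargeting formula is easy to sketch (the 14-day/2016-block constants and the 9.4-day example are from the post above; note the real client also clamps the adjustment factor, a detail omitted here):

```python
# Retarget sketch: difficulty scales by how much faster (or slower)
# than 14 days the last 2016 blocks arrived.
TARGET_DAYS = 14.0  # 2016 blocks at 6 per hour

def retarget(old_difficulty, actual_days):
    return old_difficulty * (TARGET_DAYS / actual_days)

# The example from the post: 2016 blocks in 9.4 days.
print(round(retarget(2.53, 9.4), 2))  # ~3.77, a ~49% increase
```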
Suggester
February 26, 2010, 01:35:08 AM
Quote from: satoshi on February 25, 2010, 11:06:29 PM
I don't know what you're talking about accepting easier
difficulties.
We were essentially discussing Sabunir's question about what prevents
someone from messing with the program's source code to adjust
block-generating difficulty to be very easy, then make a network on his
own and create a, say, 50,000-block proof-of-work within seconds then
finally propagate it across the real network to steal "votes" towards
his new fake blocks as technically, his proof would be "the longest". So
is there a way to verify how much work was actually put into a given PoW
(e.g., how many zeros are at the beginning of each hash or
something)?
Quote from: satoshi on February 16, 2010, 05:36:40 PM
It wouldn't work anyway because that would be only 1 minute average
between blocks, too close to the broadcast latency when the network gets
larger.
Since we're at it, what's the approximate time for proof-of-work
propagation across a network of about 100,000 machines? Is there a way
to optimize connections so that broadcasting is done in pyramid form
to minimize the needed time? For example, the block creator sends it to
10 nodes, then those 10 send it to 100 (provided that none of those 100
were among the original 11), then those 100 tell 1,000 (provided that
none of those 1,000 were among the original 111), etc., to save time.
Legion
February 26, 2010, 06:44:40 AM
This overclocked i7 still hasn't generated any keys after 8
hours...
NewLibertyStandard
February 26, 2010, 07:03:09 AM
Quote from: Legion on February 26, 2010, 06:44:40 AM
This overclocked i7 still hasn't generated any keys after 8
hours...
It may take longer than 8 hours to generate a block.
Have you previously generated bitcoins? Are the number of blocks
listed at the bottom of Bitcoin greater than 42650? Those need to
download before it can start generating coins. How many connections are
listed at the bottom of Bitcoin? Did you click Options > Generate
Coins? How much CPU does your process viewer show that Bitcoin is using?
Is your Internet connection steady? I had problems when I tried sharing
Internet from my smartphone to my computer.
Legion
February 26, 2010, 08:57:41 AM
Quote from: NewLibertyStandard on February 26, 2010, 07:03:09 AM
Quote from: Legion on February 26, 2010, 06:44:40 AM
This overclocked i7 still hasn't generated any keys after 8
hours...
It may take longer than 8 hours to generate a block.
Have you previously generated bitcoins? Are the number of blocks
listed at the bottom of Bitcoin greater than 42650? Those need to
download before it can start generating coins. How many connections are
listed at the bottom of Bitcoin? Did you click Options > Generate
Coins? How much CPU does your process viewer show that Bitcoin is using?
Is your Internet connection steady? I had problems when I tried sharing
Internet from my smartphone to my computer.
No, but.. 42663 blocks.. 8 connections.. and yes, generating. Bitcoin
uses 50-80% CPU.. but it only has access to two cores until I bump the VM
it is in to 4 cores.. Operating over Tor, by the way.
NewLibertyStandard
February 26, 2010, 09:19:24 AM
I think that no bitcoins generated in 8 hours from within a VM
utilizing two modern cores is probably not unusual. Keep it running for
a few days and I expect that you'll generate more than a few packs of
bitcoins.
Legion
February 26, 2010, 10:09:19 PM
I wonder what I could generate with all eight threads...
dmp1ce
May 02, 2010, 05:46:13 PM
Quote from: Suggester on February 26, 2010, 01:35:08 AM
Quote from: satoshi on February 25, 2010, 11:06:29 PM
I don't know what you're talking about accepting easier
difficulties.
We were essentially discussing Sabunir's question about what prevents
someone from messing with the program's source code to adjust
block-generating difficulty to be very easy, then make a network on his
own and create a, say, 50,000-block proof-of-work within seconds then
finally propagate it across the real network to steal "votes" towards
his new fake blocks as technically, his proof would be "the longest". So
is there a way to verify how much work was actually put into a given PoW
(e.g., how many zeros are at the beginning of each hash or
something)?
I am also wondering about Suggester's question. It seems like
modifying the code to give a node an advantage in generating coins might
be possible.
I am confused as to what each node on the network is actually doing
when set to generate coins. What problem are they solving that takes
100% CPU?
theymos
May 02, 2010, 09:03:51 PM
Your CPU is creating SHA-256 hashes. It's not possible to cheat: if
the hashes you create are invalid, no one else in the network will
accept them. If you inject a 50,000-block chain of "easy blocks" into
the network, everyone will immediately see that the hash for the first
block in the chain is above the current target and ignore it and every
block derived from it.
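In other words, validity is checked per block: each block's hash must fall below the required target, so a chain of easy blocks fails at the very first check. A toy sketch of that check (double SHA-256 over an arbitrary byte string; real block headers are fixed binary structures, so this is illustrative only):

```python
import hashlib

def block_hash(header: bytes) -> int:
    """Double SHA-256, interpreted as a 256-bit integer."""
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")

def valid_under(header: bytes, target: int) -> bool:
    return block_hash(header) < target

# A cheater's "easy" target accepts almost anything; honest nodes
# check against the real network target and reject the block.
easy_target = 2**255
real_target = int("00000000ffff" + "0" * 52, 16)

h = b"some block header"
print(valid_under(h, easy_target))  # very likely True
print(valid_under(h, real_target))  # almost certainly False
```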
fergalish
May 11, 2010, 12:12:08 PM
Interestingly, using laszlo's Mac OS version of Bitcoin, one can see
how many hashes per second the computer is performing. I'm currently
getting about 1 million hashes per second. Given the current difficulty
0000000013ec53, I'll have to perform about 2^35 ≈ 3×10^10 hashes
before I have a decent chance of getting one below the target, and at
10^6/s, that should take about 30000 sec, or about two per day. The
actual interval varies a lot (it's a random process), but that seems to
be more-or-less the correct amount.
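fergalish's estimate can be reproduced from the target itself: the expected number of hashes per block is 2^256 divided by the target (a sketch using the 04/05/2010 target from the table above; his round numbers differ a little, as expected for a rough estimate):

```python
# Expected hashes to find a block: 2^256 / target.
target = int("0000000013ec53" + "0" * 50, 16)  # 04/05/2010 target, table above
expected_hashes = 2**256 // target
hashrate = 1_000_000  # ~1 MH/s, as reported above

seconds_per_block = expected_hashes / hashrate
print(f"{expected_hashes:.2e} hashes, ~{seconds_per_block / 3600:.0f} h per block")
```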
Satoshi, could you update the first post in this thread, with the
complete history of difficulty-of-work increases please? I'd try, but
for some reason, I've lost my logfiles. Fortunately the wallet is
safe.
laszlo
May 11, 2010, 01:13:07 PM
Maybe someone with a little background in this statistics/math stuff
can shed some light on this..
The way this thing works is it takes a (basically random) block of
data and alters a 32 bit field inside it by starting at 1 and
incrementing. The block of data also contains a timestamp and that's
incremented occasionally just to keep mixing it up (but the incrementing
field isn't restarted when the timestamp is updated). If you get a new
block from the network you sort of end up having to start over with the
incrementing field at 1 again.. however all the other data changed too
so it's not the same thing you're hashing anyway.
The way I understand it, since the data that's being hashed is pretty
much random and because the hashing algorithm exhibits the 'avalanche
effect' it probably doesn't matter if you keep starting with 1 and
incrementing it or if you use pseudo random values instead, but I was
wondering if anyone could support this or disprove it.
Can you increase your likelihood of finding a low numerical value
hash by doing something other than just sequentially incrementing that
piece of data in the input? Or is this equivalent to trying to increase
your chances of rolling a 6 (with dice) by using your other hand?
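The scan laszlo describes, sequentially incrementing a nonce over otherwise-fixed data, can be sketched in a few lines (the toy header bytes and the artificially easy target are illustrative assumptions, not real block data):

```python
import hashlib

def mine(header_prefix, target, max_nonce=2**32):
    """Try nonces 1, 2, 3, ... until the double SHA-256 falls below target."""
    for nonce in range(1, max_nonce):
        data = header_prefix + nonce.to_bytes(4, "little")
        h = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
    return None  # nonce space exhausted; vary the other data and restart

# Artificially easy target (16 zero bits) so the toy search finishes fast:
nonce = mine(b"toy header", 2**240)
print(nonce)  # which nonce wins depends entirely on the hash values
```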
DataWraith
May 11, 2010, 09:50:51 PM
Quote from: laszlo on May 11, 2010, 01:13:07 PM
The way I understand it, since the data that's being hashed is pretty
much random and because the hashing algorithm exhibits the ‘avalanche
effect' it probably doesn't matter if you keep starting with 1 and
incrementing it or if you use pseudo random values instead, but I was
wondering if anyone could support this or disprove it.
Yep, your understanding here is correct. It does not matter what
exactly gets hashed, and no, you can't cheat without first breaking
SHA-256, which is considered difficult.
The salient property of cryptographic hash functions is that they are
as random as is possible while still being deterministic. That's what
their strength depends on; after all, if they weren't random, if there
were obvious patterns, they could be broken that way. So the ideal hash
function behaves just like a random number generator. It does not matter
what you feed in, timestamp or not, whatever's put in there, the hash
should still behave randomly (i.e. every possible outcome has the same
a priori probability of occurring). Incrementing by one works just as
well as completely changing everything every step (this follows from the
avalanche property). However, the initial value, before you start
incrementing, must be (pseudo-)randomly chosen, or every computer will
start at the same point, and the fastest one always wins, which is not
what is wanted here.
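The avalanche property DataWraith describes is easy to observe directly: flipping even a couple of input bits changes roughly half of the 256 output bits. A quick check (the two toy inputs are arbitrary; any near-identical pair behaves the same way):

```python
import hashlib

def sha256_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

a = sha256_int(b"nonce:1")
b = sha256_int(b"nonce:2")  # input differs in only two bits ('1' vs '2')

# Count differing output bits; for a good hash this hovers around 128 of 256.
differing = bin(a ^ b).count("1")
print(differing)
```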
teppy
June 02, 2010, 02:27:45 PM
A nice addition to the GUI would be an estimate of how many
hashes/sec it's computing. Either present this as a raw number or as a
"you can expect to generate X packs of bitcoins per week" figure.
This might partially solve the frustration of new users not getting
any Bitcoins right away.
Satoshi Nakamoto
June 02, 2010, 06:45:38 PM
That's a good idea. I'm not sure where exactly to fit that in, but it
could certainly calculate the expected average time between blocks
generated, and then people would know what to expect.
Every node and each processor has a different public key in its
block, so they're guaranteed to be scanning different territory.
Whenever the 32-bit nonce starts over at 1, bnExtraNonce, an
arbitrary-precision integer, gets incremented.
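The nonce/extra-nonce scheme Satoshi describes can be sketched as a pair of counters (the names mirror the bnExtraNonce mentioned above; this is a schematic of the idea, not the client's actual code):

```python
# Schematic of the search-space layout: a 32-bit nonce scanned
# sequentially, plus an unbounded extra nonce (bnExtraNonce in the
# client) bumped whenever the nonce space wraps, so no two scans
# ever cover the same territory.

class Scanner:
    def __init__(self):
        self.extra_nonce = 0
        self.nonce = 0

    def next(self):
        self.nonce += 1
        if self.nonce >= 2**32:    # 32-bit nonce exhausted
            self.nonce = 1         # start over at 1...
            self.extra_nonce += 1  # ...with fresh block data via extra nonce
        return (self.extra_nonce, self.nonce)

s = Scanner()
s.nonce = 2**32 - 1  # jump to the end of the nonce space for the demo
print(s.next())      # (1, 1): nonce wrapped, extra nonce incremented
```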