
Volunteer tester
I've come to the conclusion that cheating and redundancy might not have a solution per se.
There's been enough b*tching and moaning in the various threads on the topic, and I'm so tired of reading the same drivel over and over that I decided to do something about it: to come up with a way of detecting if a cheat has occurred.
Moreover, it has a side effect that can let Berkeley's servers decide how much redundancy is needed on a WU-by-WU basis.
.o0(Go make some coffee... this'll be a while)
Let's start with calculating credit. It's stuff that those of us in the know already know (duh!), but just in case a few crunchers don't: it's based on the processing power of the host, the time needed to crunch, the number of CPUs in the box, and a hypothetical machine's output.
Here's the formula:
[i][b]c[/b][/i] = [i]t[/i] * ( ( [i]f[/i] + [i]i[/i] ) / 2 ) * ( 100 / 86400 )
...where f is the number of floating point operations per second, i is the number of integer operations per second, t is the amount of time needed to crunch a workunit, and c is the calculated credit in cobblestones.
Note that f and i are in operations per second, and not in MIPS (as Whetstone and Dhrystone report when you benchmark).
The second term on the right, (100 / 86400), is the arbitrary cobblestone benchmark.
A machine that processes exactly one thousand million floating point and one thousand million integer operations per second would, at the end of one full day of crunching, earn 100 cobblestones.
So far, so good, until you throw multiprocessing machines into the mix:
Are f and i calculated per CPU, or are they per host?
For the purposes of this post, I'm going to assume that all hosts have only one CPU (And leave the rest to minds more informed than mine).
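If you'd rather read code than algebra, here's a minimal sketch of the formula in C. One assumption on my part: to make the units square with the 100-cobblestones-a-day example, f and i are taken in units of 10^9 ops/sec (so the benchmark machine is f = i = 1):
[code]
#include <stdio.h>

/* Claimed credit in cobblestones.
 * f, i : benchmark throughput, assumed here to be in 1e9 ops/sec
 * t    : crunch time in seconds */
double claimed_credit(double f, double i, double t)
{
    return t * ((f + i) / 2.0) * (100.0 / 86400.0);
}

int main(void)
{
    /* The benchmark machine: 1e9 float and 1e9 int ops/sec, one day. */
    printf("%.2f cobblestones\n", claimed_credit(1.0, 1.0, 86400.0));
    return 0;
}
[/code]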
Applying a little bit of algebra, the constants and the variables are separated into...
1728 = [i]t[/i] * ( [i]f[/i] + [i]i[/i] ) / [i][b]c[/b][/i]
(That's 2 * 86400 / 100 = 1728.) So we now have a Weapon of Math Instruction to use in the War on Cheaters and Redundant and Unnecessary Redundancy: the number of operation-seconds per claimed credit.
Actually, do you mind if I call it the "Cobblestone Constant" from now on?
"No, I don't... but so what?" I hear you cry.
"It means that every user's host has to yield that constant when the variables are plugged into the equation," I say.
"Uh-huh... right," you respond suspiciously.
"You've got nothing better to do, don't you?"
.o0(Actually, I do.
Go get that coffee I told you to make - It should be ready about now.)
I made a list of the last few workunits I crunched, and from that made several lists containing the fs, is, ts, and cs of my colleague crunchers.
Then I tried to figure out each individual host's margin of error (e) relative to the Cobblestone Constant when the Whetstone, Dhrystone, claimed credit, and time are substituted for f, i, t, and c.
The formula is kludgy, and when I have some spare time I'll redo it in TeX.
[b]e[/b] = | 1728 - [i]t[/i] * ( [i]f[/i] + [i]i[/i] ) / [i][b]c[/b][/i] | / 1728
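In C, that margin-of-error check really is tiny - a sketch, with f and i again in units of 10^9 ops/sec as assumed earlier:
[code]
#include <math.h>

/* Margin of error against the Cobblestone Constant (1728 op*sec per
 * credit). f, i in 1e9 ops/sec (assumed units), t in seconds, c in
 * claimed cobblestones. */
double cc_error(double f, double i, double t, double c)
{
    return fabs(1728.0 - t * (f + i) / c) / 1728.0;
}
[/code]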
Here's what I came up with:
[Table of sampled results: one row per workunit/host pair, with columns for Workunit, Host, CPU Time, Claim, Grant, Float/s, Int/s, and % Error against the Cobblestone Constant. Across the dozen workunits sampled, the errors ranged from about 0.22% to 9.63%, and a final row calculated from the column averages came out to 7.931%.]
Go get that coffee I told you to make - We need it now.
"Whoa, there! This is getting too complicated!", I again hear you cry. "Couldn't you just multiply the time by the claim and compare that to the other crunchers?"
Well, yes, but that doesn't tell you where the source of the discrepancy is.
And while you're sipping that coffee, let me make clear that I'm not accusing anyone of cheating.
I'll let the numbers do all the finger-pointing, OK?
Let's look at four workunit/host pairs: I've chosen the one with the lowest error, and the three with the highest.
Since we know all of the values of the four variables in the formulae, either from the results page or the host information pages, let's look at what happens when we try solving for one of the known variables and calculate the error relative to the known, but unused, variable.
The various equations I'm using are...
[i]t[/i] = 1728 * [i]c[/i] / ( [i]f[/i] + [i]i[/i] )
[i]f[/i] = 1728 * [i]c[/i] / [i]t[/i] - [i]i[/i]
[i]i[/i] = 1728 * [i]c[/i] / [i]t[/i] - [i]f[/i]
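As a sketch (same assumed units as the earlier snippets), each recalculation compared against the value the host actually reported:
[code]
#include <math.h>

/* Recalculate each variable from the other three, and return the
 * relative error against the reported value. */
double t_recalc_error(double f, double i, double t, double c)
{
    double t2 = 1728.0 * c / (f + i);   /* t from c, f, and i */
    return fabs(t2 - t) / t;
}

double f_recalc_error(double f, double i, double t, double c)
{
    double f2 = 1728.0 * c / t - i;     /* f from c, t, and i */
    return fabs(f2 - f) / f;
}

double i_recalc_error(double f, double i, double t, double c)
{
    double i2 = 1728.0 * c / t - f;     /* i from c, t, and f */
    return fabs(i2 - i) / i;
}
[/code]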
[Table: the four selected workunit/host pairs, showing for each the reported t, f, and i alongside the values recalculated from the other three variables, and each recalculation's margin of error. The control pair's errors stayed under 10%; the worst pair's ran to roughly 54% against the Cobblestone Constant and several hundred percent in the recalculations.]
And now we can tell what information is suspect.
The first row is used as a control, since it had the least overall error with respect to the Cobblestone Constant (Not strictly a control but for the purpose of this demo, it is).
Note that the re-calculated values are fairly close to what is reported in the workunit's report and in the host info page.
The largest discrepancy is in the recalculated floating point benchmark, but even so it's less than 10% in error.
As we move along to the second and third entries, the discrepancies become clearer.
Again, it is the recalculated floating point score that is most suspect - Its error is nearly twice that of the next largest error.
And then, the motherlode: the last, most suspect, most erroneous entry.
It's got a 54% margin from the Cobblestone Constant, at least twice that in the recalculated time and floating point operations, and seventeen times more error when it comes to the integer --
[an audience member shouts something]
I'm not accusing you of cheating!!
I'm just overwhelmed by the margin of error...
Where was I?
Oh, right!
So now that we know that the benchmark error is large, we can look at the error's spread.
In other words, how close the errors of the variables are with respect to the margin of error when checked against the Cobblestone Constant.
I would suggest that (if it isn't already) workunit results should be checked against the Cobblestone Constant as part of the validation process.
If the error is significant (which only the select few who run the show can define), then consider whether or not that error is spread equally among the four reported factors: Whetstone, Dhrystone, claim, and time.
Based on this comparison, actions can be taken.
For a quick example, let's say that a user has tampered with the Whet-/Dhry-stone values.
This process would flag it, and the server could demand (on the next communiqué, of course) that the benchmarks be re-run and returned immediately.
We already have a policy regarding false credit claims.
And as for tampering with time... well... if a workunit was sent yesterday and was returned today, I can't go claiming that it took me five days, can I?
"You were going to say something about redundancy?
I'm waiting..."
.o0(Tough crowd!)
Let's say that a 10% error with respect to the Cobblestone Constant is considered a significant error, and that any WU returned to the server with a significant error is scrutinized for causes.
Let's also say that for recalculation, if there is a recalculation margin of error 2.5 times greater than the Cobblestone Constant error, that returned WU is considered suspicious.
Now the servers ask the host to re-run the benchmarks, and if the returns are similar to the ones on file, then the WU and host are accredited.
If they aren't, a new WU is sent out for quorum - the suspect host's workunit result is no longer considered valid for the purposes of accreditation.
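As a sketch, the whole server-side decision could be as small as the C below. The 10% and 2.5x thresholds are the arbitrary examples from the last two paragraphs, and request_rebenchmark() / issue_extra_wu() are hypothetical hooks I'm making up for illustration, not real BOINC calls:
[code]
#include <stdio.h>
#include <math.h>

/* Hypothetical server hooks - stubs for this sketch only. */
static void request_rebenchmark(void) { puts("ask host to re-run benchmarks"); }
static void issue_extra_wu(void)      { puts("send another WU for quorum"); }

/* Accept, or escalate, one returned result (units as before). */
void validate_result(double f, double i, double t, double c)
{
    double e = fabs(1728.0 - t * (f + i) / c) / 1728.0;
    if (e <= 0.10)
        return;                          /* no significant error: accredit */

    /* Significant error - does one recalculated factor stand out? */
    double et = fabs(1728.0 * c / (f + i) - t) / t;
    double ef = fabs(1728.0 * c / t - i - f) / f;
    double ei = fabs(1728.0 * c / t - f - i) / i;
    if (et > 2.5 * e || ef > 2.5 * e || ei > 2.5 * e) {
        request_rebenchmark();           /* benchmarks similar? accredit */
        issue_extra_wu();                /* otherwise widen the quorum   */
    }
}
[/code]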
To summarize, the servers would be able to:
- decide on a WU-by-WU basis if a valid quorum is met,
- locate the source of anomalous reports and act accordingly,
- send workunits with sufficient redundancy when necessary,
- prevent cheating, and
- accredit faster.
[crickets chirping]
It's a Good Thing.
[light cough from audience]
[audience member stands up]
"Yeah... uh... you suck!"
____________
Volunteer tester
Wow... this could need a whole coffee pot.
____________
Volunteer tester
Told 'ya so.
It's my last major post for the next week or so... lotsa work to do, and I've been dying to get this out since the "Is Cheating Still Possible" thread.
____________
Volunteer tester
> Wow... this could need a whole coffee pot.
And two or three Tylenol and a calculator.
Volunteer tester
Nice work, NA&5boroK.
I must admit I didn't take that mug of coffee, so I may have missed something obvious.
Here's my question (or rather comment?): you seem to assume that benchmarks do give some realistic numbers. In an ideal world this would be true, but realistically we are running benchmarks that are optimized differently than the actual cruncher binaries. That's how it used to be with the BOINC CC for Windows and Linux (much higher benchmark results with the Windows binaries on exactly the same hardware), and how it is with the virtually non-optimized Einstein@home Linux cruncher (which then claims considerably higher credit based on its insanely long run-times).
I wish we had optimal benchmark and cruncher software ...
____________
Volunteer tester
Well, I am impressed ...
Just one or two points. Firstly, not being a math wiz it could just be me, but I don't follow the first equation's transform ... somewhere when I try to do it I get a 2T factor in there
... so, it could just be me ...
Second, this could catch cheating, though primarily only cheats that apply to the calculation of credit.
Which is a valid thing to do, except: what do they care about being super precise? Yes, there are those that are passionate about it, but so what?
Third, the driving factor for redundancy is to validate the science, and this does not do anything for that ...
Fourth (ok, I lied...), this adds significant computational overhead and error checking for a "who cares?" part of the system, for little overall system improvement.
Fifth, I don't follow the end part ... as in, HuH?
____________
Volunteer tester
That's why I said "if there is a recalculation margin of error 2.5 times greater than the Cobblestone Constant error, that returned WU is considered suspicious".
The errors are first compared to what's expected (1728 op*sec per credit), then the internal error is calculated.
If the internal errors are appreciable (like 1%, 1%, 1%, and 1000%), then the WU is rejected.
I'm running on a PowerPC and Linux - Doubly screwed in the benchmarks, so I took that into account.
____________
Volunteer tester
> somewhere when I try to do it I get a 2T factor in there
> ... so, it could just be me ...
No - you're right.
There's something wonky about the formula, and I think it has to do with the fact that you're multiplying an arithmetic average...
( [i]a[/i] + [i]b[/i] ) / 2
...by a rate.
The 2 goes into the 1/864, yielding 1/1728.
Then there's an implied multiplicative inverse.
It's also possible that the formula I found is actually for two processors...
> Second, this could catch cheating, though primarily only cheats that apply to the calculation of credit.
> Which is a valid thing to do, except: what do they care about being super precise? Yes, there are those that are passionate about it, but so what?
[shrugs shoulders]
Because Science demands 2-sigma accuracy?
> Third, the driving factor for redundancy is to validate the science, and this does not do anything for that ...
Well, if three valid (read: low error) results corroborate each other, then why send a fourth?
And if the three are in disagreement for whatever reason, more WUs are sent as necessary.
> Fourth (ok, I lied...), this adds significant computational overhead and error checking for a "who cares?" part of the system, for little overall system improvement.
It's hard to cheat as it is, and the discussion on redundancy has been very thorough.
I don't think that a five-liner in C is that much overhead... Still, I hadda let it out, y'know?
> Fifth, I don't follow the end part ... as in, HuH?
.o0(A cheater.)
____________
Volunteer tester
> Here's my question (or rather comment?): you seem to assume that benchmarks
> do give some realistic numbers. In an ideal world this would be true, but
> realistically we are running benchmarks that are optimized differently than
> the actual cruncher binaries.
I used to work for a large mainframe builder.
One day one of my managers and I were talking about optimization, and he told me a story about a competition where some agency was comparing computers and compilers, and came up with a benchmark suite of do-nothing programs designed to load the computer and produce some times.
One vendor kept completing the benchmarks in '0' time.
The programs were something like:
(do lots of work)
Print "done"
The vendor's compiler had a really cool optimizer that looked at the I/O statements and worked backwards, eliminating anything that couldn't contribute to the output.
Since there were no inputs, and just the one output, it optimized-out the entire benchmark.
The moral is: you do not want to optimize benchmarks.
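The same thing still happens with any modern optimizing C compiler: work that nothing observable depends on simply disappears. A small demonstration (compile with -O2), with the usual volatile trick as the countermeasure:
[code]
/* Compile with -O2: nothing observable depends on sum, so the whole
 * loop is dead code and the optimizer deletes it - "0 time", just like
 * that vendor's benchmark. */
long benchmark_naive(void)
{
    long sum = 0;
    for (long k = 0; k < 100000000L; k++)
        sum += k;
    return 0;                /* sum is never used */
}

/* Storing through a volatile makes the result observable, so it must
 * be produced - though a clever compiler may still replace the loop
 * with a closed-form sum, which is rather the point of the story. */
volatile long sink;

void benchmark_honest(void)
{
    long sum = 0;
    for (long k = 0; k < 100000000L; k++)
        sum += k;
    sink = sum;
}
[/code]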
____________
On redundancy, what's it say about this other chap's thread about "What can 'too many successes' mean?"
Volunteer tester
Damnit, Jim - I'm a User, not a Developer!
Most likely (and I'm pulling this one from out of my behind) there were too many returns that didn't jibe with each other's results.
I think that WU (and quite a few others in recent days) is from a bad, moldy, noisy, and dusty batch.
____________
Volunteer tester
> Third, the driving factor for redundancy is to validate the science, and
> this does not do anything for that ...
> Well, if three valid (read: low error) results corroborate each other, then
> why send a fourth?
> And if the three are in disagreement for whatever reason,
> more WUs are sent as necessary.
At the moment 4 are issued right out of the box.
So far (though I have been grossly distracted by Dell screwing up my computer order; I hope to finish my study on that policy this week), it looks like that is the best strategy for issuing work.
> Fourth (ok, I lied...), this adds significant computational overhead and
> error checking for a "who cares?" part of the system, for little overall
> system improvement.
> It's hard to cheat as it is, and the discussion on redundancy has been
> very thorough.
> I don't think that a five-liner in C is that much overhead...
> Still, I hadda let it out, y'know?
The problem is not the length of the code, it is the length of the code vs. the number of times it is used. So, this is not necessarily a "small" change.
I have an SQL study that shows how a "small" change in a statement can change the run time from 4 hours 15 minutes to 15 milliseconds ... the first query actually plowed through 4.5 million rows to return 2 ... the minor change I made eventually did a single full table scan ... with the later versions of Oracle I could have made a bit-mapped index if there had been a need for more speed.
I am just saying that the nominal improvement in credit reporting may not be worth the computational expense.
> Fifth, I don't follow the end part ... as in, HuH?
I was just saying that I don't follow the second half of the analysis. That may be because my brain is on vacation, but, if you want to succeed in getting something like this added, people need to understand it.
Getting 2-sigma accuracy on the credit claims is not needed.
Only that accuracy is needed for the science part of the process and this proposal does not address anything along that line ...
One of my greatest lessons was learning that for almost all of what we do, 3 digits of accuracy are more than enough. The slide rule "Rule" and using "T"-shirt sizes in estimation prove this well enough.
____________
Volunteer tester
@ Paul Buck,
What did DELL do now?
Volunteer tester
> @ Paul Buck,
> What did DELL do now?
I ordered a system last week; their site said that it was shipped Saturday for Monday delivery ... UPS did not have it in their system. Last night before I went to bed it was still not there ... the Dell guy I talked to could not find it either.
This morning UPS says they got the package at 8:10 last night ... I just checked and UPS also says that it is "Out for Delivery" ...
I guess that means I will see it today ... hopefully EARLY ...
Been driving me NUTS to wait ... I did not get any work done yesterday ... it also fired up my anxiety disorder (sigh) ... The good news is that this will take me to 7 computers in the "Farm", with the Dell being a dual Xeon 3.4 GHz with 2 M cache (1 G main memory) and a 10K RPM 70 G hard disk ... should be pleasantly fast ...
With even better luck Apple will be announcing a new Power Mac G5 with dual Dual-Core processors later this year for computer #8 (my Christmas present) ...
Though primarily bought for BOINC processing, if the Dell is as good as I hope, it will become my new PC workstation, replacing in function both my RAID server (I will move the 335 GB RAID array to the Dell) and my miscellaneous PC programs.
The new G5 will become my PRIMARY workstation to replace the dual G5 PowerMac I have ... that will probably become just a full-time BOINC system, though I may use it to host a test instance of my site ...
____________
Volunteer tester
> ... the Dell guy I talked to could not
> find it either.
This means the Dell guy said to himself "Oops, we better get this system out the door", so he called shipping and got them to get it on a UPS truck.
> This morning UPS says they got the package at 8:10 last night ... I just
> checked and UPS also says that it is "Out for Delivery" ...
Unless it was shipped "next day", your puter spent last night at the regional distribution center and will be sent to your local UPS center today.
This means you won't get it till tomorrow.
I hope I'm not right, but I use UPS for shipping all the components of my machine rebuilds.
I have some experience with them.
Last Sunday I envisioned you standing outside the regional distribution center, banging on the doors.
I hope you get it today
Volunteer tester
> > ... the Dell guy I talked to could not
> > find it either.
> This means the Dell guy said to himself "Oops, we better get this system out
> the door", so he called shipping and got them to get it on a UPS truck.
I guess ...
> > This morning UPS says they got the package at 8:10 last night ... I just
> > checked and UPS also says that it is "Out for Delivery" ...
> Unless it was shipped "next day", your puter spent last night at the regional
> distribution center and will be sent to your local UPS center today.
> This means you won't get it till tomorrow.
Yes, it was next day ... cost me $80 for that ...
> I hope I'm not right, but I use UPS for shipping all the components of my
> machine rebuilds.
> I have some experience with them.
I use them a lot too ... and this seems to be Dell, not UPS ...
> Last Sunday I envisioned you standing outside the regional distribution
> center, banging on the doors.
> I hope you get it today
I should ... see:
==========
A.M. WEST SACRAMENTO, CA, US OUT FOR DELIVERY
12:25 A.M. WEST SACRAMENTO, CA, US ARRIVAL SCAN
Apr 4, 2005
9:19 P.M. SPARKS, NV, US DEPARTURE SCAN
SPARKS, NV, US ORIGIN SCAN
=============
However, also see from Dell:
Carrier: UNITED PARCEL SERVICE
Delivery Estimate: April 4
____________
Volunteer tester
> Yes, it was next day ... cost me $80 for that ...
That's typical, they screw up and you have to pay for it.
hope you enjoy it,
Nice approach to the problem.
However, it seems to me that you are incorrectly applying the rules of finite algebra to a statistical problem.
Specifically, there is nothing in your calculations that indicates what is an excessively large error.
Statistically, this is usually determined as + or - a number of standard deviations (e.g., + or - 2 s.d. covers about 95% of a normal distribution, etc.).
This leads me to a second question...how do we know what the shape of the error distribution actually is?
It is quite possible that the error distribution is highly skewed, and that some errors that appear (to the casual observer) to be excessive may in fact be statistically reasonable.
I hope that the above is not seen as being excessively critical, but I am curious about where one would draw the line regarding acceptable error?
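For concreteness, the standard version of that test is only a few lines - a C sketch that itself assumes the roughly normal distribution I'm asking about:
[code]
#include <math.h>

/* Count errors more than two standard deviations from the sample mean.
 * Assumes n >= 2 and - the very assumption in question - a roughly
 * normal error distribution. */
int count_outliers(const double e[], int n)
{
    double mean = 0.0, var = 0.0;
    for (int k = 0; k < n; k++)
        mean += e[k];
    mean /= n;
    for (int k = 0; k < n; k++)
        var += (e[k] - mean) * (e[k] - mean);
    double sd = sqrt(var / (n - 1));

    int outliers = 0;
    for (int k = 0; k < n; k++)
        if (fabs(e[k] - mean) > 2.0 * sd)
            outliers++;
    return outliers;
}
[/code]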
Volunteer tester
Sorry not keeping eye on thread.
Ext'al HD crashed, taking OS + data w/ it. [grrr!]
Quickie replies:
@Paul: wasn't thinking retroactively - post is RFC for future implementation
@Scott: Failed stats - Berkeley defines "significant" error rate
@All: apology for guido-talk. No GUI - Just Lynx.
____________
Volunteer tester
Sorry not keeping eye on thread.
Ext'al HD crashed, taking OS + data w/ it. [grrr!]
[Addendum] It turns out that fsck didn't finish properly.
I kept trying to repair it in Disk Utility, but every time I got an error saying that the hash table was full.
I dropped into single-user mode and forced fsck to clean up the mess it left.
At least I didn't lose my data... a year's worth of stuff I'd have had to redo... [phew!]
Quickie replies:
@Paul: wasn't thinking retroactively - post is RFC for future implementation
[Addendum] I was thinking of applying this idea only to future WUs that are sent out.
Going through all the hosts and returns would be an insane amount of work.
As for the &communications overhead&, a field that flags a request for benchmarking shouldn't be too tough to handle, and the infrastructure for passing whet/dhry scores is already in place.
> One of my greatest lessons was learning that for almost all of what we do, 3 digits of accuracy are more than enough.
And that's why 3.14 is "good enough to do ancient Greek temple construction work".
("History of Pi" was a darn fine book!)
@Scott: Failed stats - Berkeley defines "significant" error rate
[Addendum] Even though I've written a , calculus and statistics are just that much beyond me.
I do know that &significant& is an arbitrary amount, and can be derived through thought and reasoning - I just don't have that mathematical capacity at the moment.
My guess is that for an error to be significant, it has to be a percentage error from the Cobblestone Constant [CC] that is appreciable in the context of all the other claims for a given WU.
Yup - I said it: "appreciable". Another arbitrary value.
Let's say that we have four WU reports, and their CC errors are {5%, 6%, 4%, 10%}.
Let's call that set S1, and define a second set [S2] as {1%, 2%, 0.5%, 3%}.
What would be more appreciable: the 10% in S1 or the 3% in S2?
I'd say that it depends on the context of the set.
Is a 10% margin of error with respect to the average of S1 as large as a 3% margin with respect to the average of S2?
What margin of error between margins of error is considered significantly erroneous is not my place to define.
.o0(I'll have to rewrite that last paragraph in symbols... text is not my best mode of communication.
I'll TeX it when I can...)
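In the meantime, here's the arithmetic as a quick C sketch: divide each error by its own set's average, and the "small" 3% in S2 turns out to be the more appreciable outlier (1.85x its set's mean, versus 1.60x for the 10% in S1):
[code]
#include <stdio.h>

/* How big is one error relative to the average error of its own set? */
static double relative_size(double e, const double set[], int n)
{
    double mean = 0.0;
    for (int k = 0; k < n; k++)
        mean += set[k];
    return e / (mean / n);
}

int main(void)
{
    double s1[] = { 0.05, 0.06, 0.04, 0.10 };  /* S1 from above */
    double s2[] = { 0.01, 0.02, 0.005, 0.03 }; /* S2 from above */

    /* avg(S1) = 6.25%, so 10% is 1.60x its set's mean;
     * avg(S2) = 1.625%, so 3% is 1.85x its set's mean. */
    printf("S1's 10%%: %.2fx   S2's 3%%: %.2fx\n",
           relative_size(0.10, s1, 4), relative_size(0.03, s2, 4));
    return 0;
}
[/code]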
> how do we know what the shape of the error distribution actually is?
I have no clue whatsoever.
Let's not forget that the sample I used here is puny, and definitely will not be useful for the purposes of applying it to the population.
Let's also not forget that I have a WU-killing machine which I did not use in the sample.
Since there are many zombie BOINC boxes out there, I wouldn't begin to hazard a guess.
> I hope that the above is not seen as being excessively critical, but I am curious about where one would draw the line regarding acceptable error?
I put the pencil in Berkeley's hands and let them draw the line.
I can take the criticism 'cuz I can dish it out just as well. :-)
@All: apology for guido-talk. No GUI - Just Lynx.
[Addendum] I couldn't remember if Lynx truncates lines in a text-area, so I was trying to conserve space.
It was also extremely annoying trying to edit the post... each time I hit left-arrow, I was sent back in the history...
____________