Following the release of Barack Obama’s long-form birth certificate, a Gallup poll showed that still fewer than half of Americans were fully convinced of his birth in the United States.
And millions of Americans have now heard reports that the Presidential birth certificate is forged.
Bright Horizon Press is pleased to announce the upcoming release of a major new book on the long-form birth certificate. Is Barack Obama’s Birth Certificate a Fraud? is the culmination of a three-month independent investigation by author John Woodman, and promises to provide answers and insights to the questions Americans are still asking.
The book’s tentative release date is September 15, 2011.
For more information and to pre-order copies, please visit www.ObamaBirthBook.com.
I received the book today.
Can we discuss it here or on the internet forums and quote from it?
(as stated on page 6)
Do we need the prior written permission of the publisher,
and will they allow it?
(New Horizon Press)
175 endnotes/references; no alphabetical index of keywords with page numbers;
~240 pages, ~1500 characters per page → ~300 KB uncompressed
Sure, we can discuss it here.
At this point, what do you consider the strongest remaining argument
of the birthers?
(yes, I know you think they have “none”, but the best out of these)
No summaries, no abstract.
Do we really want everyone to read the
whole book? That makes the audience smaller.
Best out of none? Wow, that’s tough.
At this point, in the absence of any startling new evidence, I regard birtherism — both legs of it — as disproven.
I mentioned in a private email the other day that I can think of ONE possible remaining pathway to demonstrating forgery in the PDF. It would be time-consuming and expensive. Odds of successfully doing that? Virtually zero.
As for the “natural born citizen” claims, the best arguments seem to be pretty much the ones that lost in US v Wong Kim Ark. Legally speaking, it’s hopeless. There’s a historical source or two that is faintly supportive. But against the mountains of other historical evidence, it’s like the classic film Bambi Meets Godzilla.
You already said it best: They have no remaining arguments. Every argument they’ve produced, as far as I know and can think (and I can think of probably at least 60 different arguments and claims, and that’s probably an underestimate) has been shown to be completely invalid, wrong, or flat-out false.
And the obvious reason for that is because none of what they’re claiming is true.
Let me make clear, though, in regard to the above comment.
I never completely discount any possibility, no matter how remote. Asteroid hit the earth this year? Yeah, there’s a chance. Not much of one, but there’s a chance. You’re gonna win $350,000,000 in the lottery? Yeah, there’s a chance. Obama’s from another planet? Yeah, there’s a chance. 1 in about 1,000,000,000,000,000,000,000,000,000,000,000,000, but it’s still a chance.
But for any birther to take any hope from my comments above would be about like Lloyd in the movie Dumb and Dumber:
Lloyd: What do you think the chances are of a guy like you and a girl like me… ending up together?
Mary: Well, Lloyd, that’s difficult to say. I mean, we don’t really…
Lloyd: Hit me with it! Just give it to me straight! I came a long way just to see you, Mary. The least you can do is level with me. What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I’d say more like one out of a million.
Lloyd: So you’re telling me… there’s a chance!! YEAH!!!
The deadline of the book was ~Sep. 2011; no new arguments since then?
Can we make a wiki from your book and update it ?
I was posting here http://www.thefogbow.com/forum/viewtopic.php?f=73&t=7866&sid=14fac145eb35f1e16ed9f48b20eb91c0
when night came over Springfield
you’ll find there also their interpretation of “chance” …
blurring from a 1961 security-paper form copied onto 2011 security paper?
TXE from a paperclip below the paper when stamped?
no security paper on the 1961 form to explain the blurring
paperclip too big for the TXE area
I’d like to see an overlay of the white dots and the Savannah Guthrie seal,
or another, better-resolution Hawaii seal, so as to see whether
they match or are related
page 97, the Tom Harrison claim
what about Harrison’s other claim, the 2 pencil marks on the right?
what about the Zebest claims from the posse?
Zebest not mentioned in the expert parade
Not sure what you’re asking about the pencil marks.
Mara Zebest kind of appeared after I’d written most of the book. I looked at her claims but didn’t see anything new. All the things she was saying had already been said by somebody else.
I gave a bit of an update on a couple of points in this thread.
Gillar has a video about it on YouTube,
also repeated in his interview with Harrison on April 2, 2012.
You know, I’m getting tired of having to refute every frankly idiotic claim made by the birthers.
They will never run out of idiotic claims. I’ve already factually refuted at least 60 in a row. They are ZERO FOR SIXTY. Or more. Who knows the exact count?
Isn’t sixty or so enough for people like you to get a clue that they have no case at all?
When does it ever stop?
Mark says the clipping mask had to be set by a human to hide information. I’ve commented on this before. Maybe, maybe not. There’s some reason to think it’s machine generated. But in either event, at most, it’s no evidence of anything more sinister than somebody trying to make the image look nice for public presentation.
The idea that those two smudges at the right are “vital statistics markings” is literally laughable.
Mark tries to head off the obvious — that they are smudges picked up from the scanner glass — by saying that he’s consulted “computer experts” on the matter:
“They’ve told me that the color values of the markings at the bottom right fall within the value range of the vital statistics markings found throughout the document. They appear to have been made with the very same pencil.”
I don’t know who Mark consulted (although I can guess), but he sure as heck didn’t consult me. Why not? He knew he wasn’t likely to get an answer he liked. I’ll tell you who else he didn’t consult, either. He didn’t consult Neal Krawetz, who is a real and widely recognized PhD computer expert. And I would bet good money he didn’t consult any of the first three professional computer forensics people that WorldNetDaily hired to evaluate the birth certificate but then dropped like hot potatoes when they didn’t provide the answers they wanted.
They’re similar in color to the pencil marks. So what?
I can immediately think of several great big huge elephants in the room that Gillar doesn’t touch here. Why not? They shoot down his forgery theory.
The first is that there’s an obvious innocent explanation — grayish smudges on the scanner glass — and no plausible “forgery” explanation. They’re there in the PDF but not in Savannah Guthrie’s photo. Why not? If they were in a forgery file, and the forgery file was printed, they would’ve been printed too. Oh, wait — “the forger must have turned off that layer.” Why? “Because he was a stupid forger.” The same tired excuses they use again and again for the fact that their theory just doesn’t make any sense.
2. Worse, where did the raised seal on Savannah Guthrie’s photo come from? Gee, I don’t know. Why would the Hawaii Department of Health swear that Obama’s PDF has all the right info that matches the one in their files? They have no answer for this. “Gee, I don’t know. The Hawaii Department of Health is obviously in on it.”
If the Hawaii Department of Health is in on it, then why didn’t they just print up an authentic looking paper certificate? You never get an answer to that one, either.
3. The PDF cannot possibly be an original document that produced the AP document. Why not? The AP document contains details the PDF doesn’t. They don’t just magically appear.
Neither can the AP be an original document that produced the PDF. Why not? Well, for one thing, the AP document contains some of the safety paper background but only part of it. The AP document doesn’t have all the color.
Obviously there’s a third document from which both the AP and PDF documents come. I wonder what it is? Oh, wait. We have a paper document with a raised seal, and the State of Hawaii testifies that they sent President Obama two paper documents with raised seals.
Gosh, I’ve got a radical idea. Maybe the paper document with the raised seal is the original that the AP document and the PDF are copies of. Ya think?
4. At the risk of pointing out the idiotically obvious — the two marks that Gillar calls “vital statistics markings” don’t look like any identifiable characters at all. Vital statistics markings are numbers. One might argue that the lower smudge is a “0,” but they would be wrong to do so, as it seems to be filled with lighter gray. As for the top smudge, it’s hopeless. A backwards C? It’s clearly thicker than any pencil mark on the paper.
To be honest, they look like drops of something liquid or semi-liquid that got spilled or splashed or stuck onto the scanner glass, and then were partly or mostly removed. They certainly don’t look like pencil markings.
5. More border is shown in Savannah Guthrie’s photo than in the PDF. Under their scenario, how is this remotely possible? It isn’t.
There are five different elephants in the room that didn’t require much thinking at all to produce. Any one of them is sufficient to sit on Gillar’s claim and squash it flat in terms of it being any real evidence of “forgery” at all.
I’ve talked to Mark in person. I believed him, and willingly gave him the benefit of the doubt, when he told me that he valued the truth above pushing a particular agenda. After he did the propaganda piece for Arpaio’s press conference, I frankly lost whatever respect for the guy I might have had.
thanks, I had missed this too.
> Neither can the AP be an original document that produced the PDF. Why not?
> Well, for one thing, the AP document contains some of the safety paper
> background but only part of it. The AP document doesn’t have all the color.
Did they (the WH, the Obama campaign) deliberately filter out the safety-paper background
in the AP copy? I haven’t found any hint of it outside the left-side certificate area.
Aren’t the pencil marks outside the certificate area, on the security
paper/Onaka area?
I don’t understand what you mean with elephants here
Why can’t you just debunk the claims without pushing your agenda,
which usually becomes the main part of your posts?
(how stupid they all are, what they all did wrong, etc.)
Do you have any idea what a monumental task it would be to filter out the safety paper from every fiddly part of the paper? Get real.
Buy John’s book if you want the answer. He deals with that in there. Suuurely 20 bucks is not too much of a sacrifice if you are really looking for answers rather than to waste everyone’s time repeating themselves.
> did they (WH, Obama campaign) deliberately filter out the safety paper background in the AP? I haven’t found any hints to it outside the left-side certificate area.
No. It’s an image of a photocopy. When the original was photocopied, a lot of the safety paper background was lost.
> I don’t understand what you mean with elephants here
Sorry, it’s an American expression. An “elephant in the room” is a great big huge enormous issue that is perfectly obvious, but which someone avoids talking about and pretends it doesn’t exist.
> why can’t you just debunk the claims without pushing your agenda, which usually becomes the main part of your posts? (how stupid they all are, what all they did wrong etc.)
I’ve been pretty patient now for TEN MONTHS. Honestly, I thought when my book was released that quite a few people would say, “Oh. So that’s what’s going on here.”
The birthers didn’t bat an eye. They could not and have not refuted even one substantial point in the book. They’ve tried on a few of them, but the results have been disastrous.
Most of the points made in the book, though, they’ve simply ignored. They’ve simply continued to push claims that have been discredited both by myself and others, things that are known to be false.
In fact, they went out and essentially recruited Sheriff Joe Arpaio to lend authority and credibility to their nonsense.
You have to understand as well that quite a few points are raised that are in fact things I’ve already dealt with. As time has gone on, the entire thing has just gotten more and more ridiculous. So if I’m about out of patience with it, that’s why.
>> did they (WH,Obama campaign) deliberately
>> filter out the safety paper background in the AP ?
>> I haven’t found any hints to it outside
>> the left-side certificate-area.
> No. It’s an image of a photocopy. When the
> original was photocopied, a lot of the safety
> paper background was lost.
Not only a lot of the safety paper was lost, but all of it,
outside that left border. I tried to recover it with irfanview
(enhance colors, gamma correction = 0.01, contrast = 80),
which works well to show small color differences
and clearly picks up the old certificate which
lay behind the new one when AP copied it.
But absolutely nothing of the safety paper.
Remember, this was one of the points in the
2012.01.21 Gillar debate.
It’s the same with the CBS photograph with the 2 fingers.
Is the photocopier so much different from
cameras, which do show lots of safety-paper background?
I speculate it was a color scan, and then they deleted
everything greenish by computer and then printed it.
For a moment I thought that might also explain
the “babyblue” background in the AP, but the CBS photograph …
The elephants: we had that with RC calling it your issue
(AP has more details than the WH copy → the WH copy is not the original)
in the debate.
One big, main issue; but here you had 5 small issues.
“Out of patience” would result in ignoring them, but not
in exaggerated insulting, which takes additional time.
I guess it’s rather the whole birthers-vs.-obots,
Dems-vs.-Reps atmosphere which you see in the blogs and forums.
You are expected to be that way so as to be accepted in your group.
You show which group you are in this way.
Birthers must know by now that there are problems with their
arguments. But they can’t admit it; that would make them
look silly in their group, make them an outsider.
Some have a financial interest in it, so they keep it going.
I don’t think Zebest, Corsi, Gillar, or Zullo still believe what they say.
> “out of patience” would result in ignoring them but not
> exaggerated insulting, which takes additional time.
I’ve been ignoring them, pretty much, since March. Aside from this week, and right after Arpaio’s press conference (March) I can’t recall the last time I wrote about them, or anything about the forgery theories.
I’ve commented here because you brought up points or asked questions.
Secondly, the thing that you somehow don’t seem to understand is that the “insulting” — as you call it — is not exaggerated. I spent probably a YEAR patiently analyzing their positions, answering questions, etc., while other people ridiculed them. It is long, long past completely obvious for those of us who have been following these claims for a good while now that their claims are not only disproven, many of them are literally ridiculous.
by now, everyone knows what you think about them.
No matter, how often you repeat it.
Which scarcely, matters, because John, is correct. The only reason, he needs, to repeat it, is that Birfers can never, accept reality.
I just figured out that the 2008 COLB behind the 2012 picture
was already present in the copies that the WH handed
out to the press, and was not introduced by the AP scanner,
because the 2008 COLB is also visible in the copies
from CBS and CSM.
I think this blog is not so suitable for discussion.
How do you find new posts or replies to your posts ?
Can’t we make a forum ?
You could always run your own forum.
This is a blog. A blog is a place where the owner posts his or her research and views. Comments are important but are always secondary to the articles. I recommended that you sign up for The Fogbow Forum where you managed to alienate almost everyone there and got yourself relegated to FEMA Camp 7-1/2. Most political forums will not even allow Birthers because they consider them fringe kooks akin to 911 Truthers and moon landing deniers. The choices are limited. I think you should go post at Apuzzo’s blog, Orly’s blog or maybe at ladyforests’ blog. 😉
Please don’t send him over to Apuzzo’s. Or Orly’s. I like Guenther too much to send him there.
Both places are really only asylums for the terminally deluded.
I did a systematic computer search on the
1407 connected components in the text layer.
There are many examples of identical letters!
C(E)RTIFICATE OF LIV(E) BIRTH
CER(T)IFICATE OF LIVE BIR(T)H
two square boxes
21 times “d” with 158 pixels,
all 21 identical
also HUSSE(I)N OBAMA,I(I)
and O(B)AMA,II … O(B)AMA
as already found in July by Cadwaladr
That should confirm my theory that, in a first
step, similar letters were replaced by identical ones.
But they got a bad compression rate for that text layer,
~68K vs. ~55K with gzip, which uses the same “Flate” algorithm.
I don’t know why.
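For what it’s worth, the kind of component search described above can be sketched in a few lines of Python. The bitmap below is a made-up toy (two identical 2×2 “letters” plus one stray pixel), not the actual text layer:

```python
# Toy 1-bit bitmap: "X" = set pixel. Two identical 2x2 glyphs and a lone dot.
bitmap = [
    "XX..XX",
    "XX..XX",
    "......",
    "X.....",
]

h, w = len(bitmap), len(bitmap[0])
seen = [[False] * w for _ in range(h)]
components = []

for sy in range(h):
    for sx in range(w):
        if bitmap[sy][sx] != "X" or seen[sy][sx]:
            continue
        # Flood-fill one 8-connected component.
        stack, pixels = [(sy, sx)], []
        seen[sy][sx] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and bitmap[ny][nx] == "X" and not seen[ny][nx]):
                        seen[ny][nx] = True
                        stack.append((ny, nx))
        # Normalize to the bounding box, so identical glyphs compare equal.
        y0 = min(p[0] for p in pixels)
        x0 = min(p[1] for p in pixels)
        components.append(frozenset((y - y0, x - x0) for y, x in pixels))
```

Grouping components by their bounding-box-normalized pixel sets is what lets identical glyphs (like the 21 identical “d”s) be spotted mechanically rather than by eye.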
6 months behind and a dollar short.
Check out JBIG2 lossy compression, my research-impaired friend.
thanks. reading …
I didn’t check these yet, since deflate was used.
And we have 8 one-bit-layers ?!
And the WH apparently avoided proprietary software ?!
But it looks similar to the layer-splitting strategy
With JBIG2 it would have been better but I believe that preview may not support JBIG2 lossy compression and thus selected FLATE.
Some software just does not support the full range of encodings possible. You get the best it has to offer.
As you can see from the above link, the compressed PNG
(generated with irfanview) is only 43,558 bytes!
PNG also uses the Flate compression algorithm
(as in PKZIP, GZIP, 7zip …).
Why is it so much more efficient than what they used in the WH PDF?
They went through the effort to make that unusual separation into
eight 1-bit layers and one 8-bit layer, and then they used that lousy
Flate variation. Maybe that can help to identify the software that
they used. Presumably an ancient one. They also used the old
PDF 1.3 format for the .pdf.
There is one proprietary variation of Deflate (Deflate64), I think it’s used in 7zip,
but not in gzip or PNG, AFAIK,
which allows a 64K backward search window instead of only 32K.
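On the “why is the PNG smaller?” question: one plausible factor is that PNG applies a per-scanline prediction filter (Sub, Up, Average, Paeth) before handing the data to Deflate, which turns smooth gradients into long runs of small values. A toy demonstration with synthetic gradient data (not the actual layer):

```python
import zlib

# Synthetic 1000x100 8-bit "image" with smooth structure in each row,
# standing in for continuous-tone data (NOT the actual WH layer).
W, H = 1000, 100
img = bytes((c * c + 17 * r) % 256 for r in range(H) for c in range(W))

def png_sub_filter(data, rowlen):
    """PNG 'Sub' filter: replace each byte with its difference (mod 256)
    from the previous byte in the same row."""
    out = bytearray()
    for i in range(0, len(data), rowlen):
        prev = 0
        for b in data[i:i + rowlen]:
            out.append((b - prev) & 0xFF)
            prev = b
    return bytes(out)

raw_size = len(zlib.compress(img, 9))                       # plain Deflate
filtered_size = len(zlib.compress(png_sub_filter(img, W), 9))
print(raw_size, filtered_size)
```

In this toy case the filtered stream compresses far better with the identical Deflate backend, so two files can both use the “same Flate” and still differ a lot in size depending on the pre-processing and encoder settings.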
Learn more about Mac Quartz. Let me know when you have caught up with the rest of the world.
I didn’t find what the rest knew 6 months ago;
where is it?
Wiki says JBIG2 is only supported since PDF 1.4.
I’ll sleep now.
Yes, which is why Preview used FLATE where the software used for MRC used JBIG2 lossy. There you have your hint as to what software may have been used.
There are so many ways to encode, and it is hard to predict which one will do best. Yes, an indexed PNG will be smaller but this is not how these algorithms work. B&W image masks are not compressed using PNG.
The PNG is 43,558 bytes when an indexed format is used rather than RGB and alpha channels.
Did Papit or Selany test these Flate compression programs with various
settings and compare them with the compression rate of the 1-bit layers in the PDF?
Or the DCT compression rate of the 8-bit layer?
That should tell us something about the software that was used by the WH
to compress the .pdf, and I assume it will explain most of the “strange things”
and let us reproduce them.
You mean something like this (?):
First, the 1-layer scan image file was run through a JBIG2 compression
that decided to separate it into those 9 layers and encode
each of them separately with JBIG2 into one PDF 1.4.
Then that PDF 1.4 was opened by Quartz/Preview; the layers
were decompressed, but the layer decomposition was maintained.
Then Quartz/Preview compressed it again, doing the layers
separately, compressing the eight 1-bit layers with its
Flate at a low compression setting (which maybe was the default)
and the 8-bit layer with DCT (I haven’t yet tested how effective
that compression was), into one PDF 1.3.
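Whichever pipeline it was, the codec actually used for each image stream is recorded in the PDF itself: every image XObject dictionary carries a /Filter entry (e.g. FlateDecode, DCTDecode, JBIG2Decode). A crude sketch of pulling those names out of the raw bytes; the fragment below is a toy stand-in, not the real file:

```python
import re

# Toy fragment mimicking two image-XObject dictionaries. For the real file
# you would instead read the bytes with open("the.pdf", "rb").read().
pdf_bytes = (
    b"<< /Type /XObject /Subtype /Image /BitsPerComponent 1 "
    b"/Filter /FlateDecode >>\n"
    b"<< /Type /XObject /Subtype /Image /BitsPerComponent 8 "
    b"/Filter /DCTDecode >>\n"
)

# Grab the name following each /Filter key (handles only the simple
# single-filter case, not /Filter arrays).
filters = re.findall(rb"/Filter\s*/(\w+)", pdf_bytes)
print(filters)
```

Run over the actual WH PDF, this sort of scan is what shows the eight FlateDecode streams and the one DCTDecode stream being discussed here.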
To recall, we have 8 high-resolution 1-bit layers and 1 low-resolution 8-bit layer.
2 of the 1-bit layers contained groups of white dots.
John Woodman writes on page 95 of his book
(The Scanner with X-ray Vision):
> First, the items that the program found suitable to pull out
> into high-resolution solid-color layers were “extracted.”
> Then, those pixels in the background layer – whatever their
> original color might have been – were replaced with safe white
> pixels.
> Then, the background layer was optimized. This mixed the
> colors of that layer up a bit, exactly as we saw them mixed in the
> chapter on chromatic aberration. And when I tested the theory,
> the results were as predicted: varying shades of green pixels that
> previously had been white.
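The extraction-and-white-fill step described in that passage can be sketched with a tiny synthetic grayscale “scan” (all pixel values and the threshold are made up; the real software’s thresholding rule is unknown):

```python
# Tiny synthetic grayscale "scan": 0 = black ink, 255 = white paper.
image = [
    [255, 250,  10,  12, 248],
    [252,   8,   9, 251, 249],
    [ 11, 253, 250,  10, 247],
]

THRESHOLD = 128  # assumption: anything darker is treated as foreground/"text"

# 1-bit foreground mask: True where a pixel gets lifted to a solid-color layer.
mask = [[px < THRESHOLD for px in row] for row in image]

# Background layer: lifted pixels replaced with "safe white" (255),
# as the passage above describes, BEFORE the layer is JPEG-optimized.
background = [
    [255 if mask[y][x] else px for x, px in enumerate(row)]
    for y, row in enumerate(image)
]
```

It is the lossy JPEG pass applied to the background afterwards that, on Woodman’s account, smears these safe-white patches into varying shades of green.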
can you show that picture ?
I could not verify this. While they become a little greenish, they are still
almost white and clearly different (whiter) from what we see in the background layer
(with JPG compression).
I doubt that the pixels were replaced with “safe white pixels”.
But why green ones? In the text areas they were replaced with almost-white ones.
But these were black/dark areas.
Maybe it was XORed with the inverse or some such.
Page 69, the white halo:
I can’t verify this either. I extracted the safety paper from the background
and replaced it with an intact, complete safety paper (→ halos removed).
When I sharpen this with irfanview I get totally different effects,
e.g. on the D in Dunham’s signature.
Shall I upload pics?
Can we upload pics to this page?
Can we convert the book to a wiki and add to it?
I made a pic here:
to demonstrate the effect of JPG compression.
I started with the background layer,
24-bit 1276×1612 BMP, 6,323,910 bytes,
removed everything lighter than 168, added an artificial
security-paper background, and then compressed it
with irfanview as a JPG, at compression rates of
80%, 50%, 20%. Just checking the file sizes, I assume
the WH PDF background layer was compressed at
50–60%. As the PDF stored it as 8-bit, I also tried it
by first converting to an 8-bit BMP:
Well, nbc said it was compressed with JBIG2,
but the metadata in the WH PDF says “DCT”, discrete
cosine transform, which is the algorithm used in JPG.
Maybe it was compressed with JBIG2 before Quartz Preview
saw it, and at that point the halo was already there?
This is all just in the background layer; no text was removed here.
I couldn’t create a halo by JPG compression yet.
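Since the 8-bit layer’s metadata says DCT, it may help to see what DCT quantization actually does to a hard ink/paper edge. A pure-Python 1-D sketch on an 8-sample block, with the crudest possible “quantization” (dropping the two highest-frequency coefficients; real JPEG divides and rounds against a quantization table instead):

```python
import math

def dct(block):
    """Orthonormal 1-D DCT-II."""
    N = len(block)
    return [
        (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
        * sum(block[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        for k in range(N)
    ]

def idct(coeffs):
    """Orthonormal 1-D DCT-III, the exact inverse of dct() above."""
    N = len(coeffs)
    return [
        sum(
            coeffs[k]
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * math.cos(math.pi * (n + 0.5) * k / N)
            for k in range(N)
        )
        for n in range(N)
    ]

edge = [255, 255, 255, 255, 0, 0, 0, 0]   # white paper -> black ink
coeffs = dct(edge)
coeffs[6] = coeffs[7] = 0.0                # crude "quantization" step
rebuilt = idct(coeffs)
```

The rebuilt edge over- and undershoots (values above 255 and below 0): classic DCT ringing, which shows up in 2-D JPEG as faint light and dark fringes at sharp edges. Whether that effect is big enough to account for the halo here is exactly the open question.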
Googling for the halo:
> However, an oversharpened image produces a halo effect around the edges.
> Too much sharpening produces a halo effect around the edges.
> … with the pixels in the whitespace around the text. This produces a
> halo around the letters.
what John said.
He shows an example with a black text on a grey background.
OK, trying again: 3 passes of sharpening with irfanview do produce a halo
for me, of size ~1 pixel, around the black text.
But it also sharpens the security-paper background so that
it really looks strange.
(I forgot to save it after printing.)
I noticed there is a small halo in the AP and the obamafile copies. You see it better when you
enhance colors. So it could be a hardware thing, and the halo was already there when the WH
color-scanned the Hawaii copy.
but that’s for GIF, not JPG
The halo comes from applying a digital unsharp mask for edge enhancement.
It is a huge pain scanning a combined text and image document to sharpen the image
section without getting the halo around every letter.
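That unsharp-mask overshoot is easy to reproduce in one dimension. A toy sketch on a 10-sample ink-to-paper step, with a box blur standing in for whatever Gaussian kernel the scanner firmware actually uses:

```python
signal = [0] * 5 + [255] * 5   # ink -> paper, one hard edge

def box_blur(s, radius=1):
    """Simple box blur, a stand-in for the real low-pass kernel."""
    return [
        sum(s[max(0, i - radius):i + radius + 1])
        / len(s[max(0, i - radius):i + radius + 1])
        for i in range(len(s))
    ]

def unsharp(s, amount=1.0):
    """Unsharp mask: original + amount * (original - blurred)."""
    return [v + amount * (v - b) for v, b in zip(s, box_blur(s))]

sharpened = unsharp(signal)
```

The overshoot above 255 on the paper side of the edge is the white halo; the undershoot below 0 (clipped to black in a real image) darkens the ink side. Flat regions away from the edge are untouched.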
> Monckton is a gem among the European elite. His brilliance casts a bright light
> across the Atlantic Ocean to ferret out the cockroaches commemorating their own
> genius in having the honor of thinking like every single one of their peers.
I claim to have debunked two of John’s claims in the book.
(I’m 80% sure, but someone should check this.)
1.) The white halo.
John explains it by sharpening (page 70). The problem with this is that sharpening would
also have sharpened the background security paper (at least in irfanview), which would
likely be visible. There is no sign that the black background text (the lines, the Lee signature,
etc.) was separated from the security-paper green and then sharpened separately.
Also, the halo around the big text-layer letters is too big to have been produced by sharpening.
E.g. the first line, “CERTIFICATE OF LIVE BIRTH”, contains big white fields that are not
explained by the JPEG-compression blurring of the background layer. The compression
causes much smaller effects and doesn’t enlarge white fields that much.
2.) The X-ray thing, page 93 (extended to all eight 1-bit layers, not just the white dots).
Again, John explains it by the blurring from the JPEG optimization, and again incorrectly
assumes that pixels “lifted” to other layers were always replaced by white pixels in the background
layer (page 69, lines 9–11; page 95, lines 24–26).
Sometimes they were replaced by greenish areas with sharp borders to the white areas
(areas in the background layer that are behind text-layer areas and which were replaced by white pixels).
To see this, look e.g. at the area in the background layer behind the letters here:
(only the borders of the letters are printed, so you can X-ray through them to the background layer)
This was not blurred much by the subsequent JPG compression of the background.
How were the colors calculated for the areas in the background layer that lie
behind the pixels lifted to other layers? I don’t know yet. Maybe as the average
of the pixels in some neighborhood, ignoring the lifted pixels.
I do not see these as signs of forgery, just that a strange, unusual piece of software was used.
And the scanner maybe created large halos for whatever reason.
Or there was an additional step before the JPG compression that created the halo.
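The “average of the unmasked neighborhood” guess can be written down concretely. To be clear, this infill rule is pure speculation, not the documented behavior of any particular MRC encoder:

```python
def infill(image, mask):
    """Replace each masked ("lifted") pixel with the integer average of its
    unmasked 8-neighbors; fall back to white if no neighbor survives.
    Speculative sketch only."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            vals = [
                image[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if not mask[ny][nx]
            ]
            out[y][x] = sum(vals) // len(vals) if vals else 255
    return out

# Toy example: the 10s are "lifted" text pixels amid ~200-valued paper.
image = [[200, 10, 200],
         [198, 10, 202]]
mask = [[False, True, False],
        [False, True, False]]
filled = infill(image, mask)
```

If something like this ran, the infilled area would take on the greenish neighborhood average rather than white, which would match the sharp-bordered greenish patches described above.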
Check out MRC and look for data filling. But there are a few additional problems that you have yet to consider. I have also considered sharpening, but as I and others have already pointed out, sharpening is unlikely to be responsible for the halo, as the size of the halo would have required unreasonable sharpening of the background layer.
But MRC is known to create halos.
As to JPEG and sharpening or JPEG and MRC halo. There is the additional problem that the text was subsampled to twice the resolution of the background so it is not unreasonable that there will be strange boundary effects.
While I appreciate your efforts, your explanations are too vague to appreciate fully. Perhaps you could try to outline the examples and the experiments you have done so far? So far you are still running behind.
And really, if you cannot learn to properly format your postings, I feel that you are unlikely to see any further responses on my part. Your contributions are impossible to read and decipher.
So focus on MRC halos and unsharp masking, combined with different densities of foreground and background, and the use of JPEG as a lossy compression.
Also, realize that there are many ways to implement an unsharp mask. But I agree that the mask affects the text as well as the background. Adobe has text sharpening… which sharpens the text. Hmm…
I appreciate your efforts. A good start.
Look into an unsharp mask whose strength is determined by the amount of contrast as a logical explanation.
As I understand it, MRC just decides what goes into which layer. It is not a compression
algorithm or a picture-manipulating algorithm, only a method for splitting the picture
and organizing the subsequent work. So how can it “create” halos?
There can at most be “problems” at the boundaries, which is only 1 bit here between
the text layer and the background layer.
The JPG (not JBIG2?) compression can blur this a bit; I tried it with 50% (irfanview)
JPG compression, which gives approximately the compressed size that we have in the PDF.
here is one example:
The effect is not big enough to explain these things.
I didn’t find anything with Google, either, saying that JPG compression creates halos.
Here is the whole PDF in original resolution, with black-outlined text, as a compressed BMP:
5.3MB compressed, 25.3MB uncompressed. Uncompress it with
gzip -d outline.bmz, then rename it to outline.bmp.
Here are the smaller 1-bit-layers plus my programs to manipulate them:
35 files, 1.1MB currently, compressed with lha
biwh?a.bmp are the 1-bit-layers with black colors
biwh?b.bmp are the 1-bit-layers with original colors (I hope I got them correctly)
executables run under Windows XP command line mode cmd.exe
C-source code is attached to the executables
You apparently, by your own admission, can’t figure out how to format your own posts by simply using word wrap, and yet you are claiming to understand the higher points of graphic design? Gimme a break.
Suranis has a point—your inability to handle a simple, everyday task (composing in an editor with word wrap and performing a copy and paste to the comment widget on the website) casts doubt on your ability to understand nuanced technical details. Moreover, you are putting forth a claim and asking others to verify your work. Not taking the time to make your posts readable is, in my opinion, a pretty blatant show of disrespect to your audience, of whom you are asking a favor. I think you’ll find reactions like nbc’s common—and justified. In fact, I would suggest that no one reply to any of gsgs’s comments until he learns to format them competently.
Still unwilling to properly format your postings. If you had done reading on MRC you would have known why it creates halos.
Good luck on your research. As you show no willingness to present your information in a readable fashion, I see no reason to further hold your hand.
Well, I guess that making readable posts was beyond the abilities of gsgs—too bad.
I thought you might be interested.
Mara and Garrett are being discussed at the Washington Times, in part 2 of the Zullo/Arpaio story.
Here is what Papit said:
I wonder if he was interviewed before he got schooled here or he is just lying?
I’m sure he doesn’t think he got schooled here—he thinks that since no one here could tell him the precise method used to generate the LFBC, nor give him an example of the hyper-specific nature he demanded (which he would most likely have dismissed on a made up technicality had it been proffered…), that his argument wasn’t completely debunked (as, in his mind, it wasn’t). I think it wouldn’t have mattered whether or not this interview was before or after he was schooled here—he would have done it exactly the same in either case.
In fairness, I don’t think his latest paper has really been refuted; at least not in a clear, concerted and decisive way —
So, how do you refute someone’s claim that basically goes “I couldn’t do something” anyway? Prove that they could have?
I think that’s been a bit of the difficulty in there being a clear, immediate, authoritative refutation.
The other part is: He’s asserted that the optimization methods in use aren’t capable of producing the effects noted. Personally, I think the claim is close to being invalid on its face. But I can’t really say that without giving some good reasons as to why.
The whole situation seems to lend itself to a short-term lack of decisive clarity. I suspect it will move to a point of clarity that this claim is as cracked as the large stack of other birther claims that have been pretty much relegated to the dustbin among those who’ve looked at them rationally, but which continue to reverberate among the severely reality-challenged. But I don’t think we can necessarily say that quite yet.
I haven’t had time to read the whole thing yet, but… http://www.obamaconspiracy.org/wp-content/uploads/2011/07/Zatkovich-Obama-PDF-report-final.pdf
“All of the modifications to the PDF document that can be identified are consistent with someone enhancing the legibility of the document.”
Maybe I’ll have some time later tonight to look it over.
Are you working on something in retirement?
I’m not at all eager to start sorting through all of the technical writings available — for example there are around 100 patent filings on various aspects and variations of mixed raster content optimization, plus who knows how many published, readily available technical papers. You shouldn’t count on me to invest a good deal of additional time at this point to refute the “optimization won’t do this” claims by birther Papit.
That said, I would think that somebody ought to be able to authoritatively evaluate Mr. Papit’s claims, and I suspect that an authoritative evaluation is likely to end up by demonstrating that his technical claims are entirely invalid.
I do know of one person in particular who I’m hoping will weigh in on the technical aspects. We’ll see.
I also have a draft post, which was already getting close to finished when Papit appeared here to crow that he’d “proven” that the PDF had been “tampered with,” that I may attempt to actually finish. That post sort of bypasses Papit’s technical claims altogether. At this point, though, I think that someone ought to address the major claims made in his paper.
Reality Check says: “I wonder if he was interviewed before he got schooled here or he is just lying?”
I’m waiting for someone to post all the points that tear down his argument there. Otherwise, I’ll do it tonight after I get back from dinner with my sis who is visiting from Florida.
The problem is that Papit was EXTREMELY vague in all his claims. He refused to get into anything specific, and when he did, he got schooled hard, so he refused to even acknowledge the failure and went right back to being vague.
Claim 1: He is a superdooper computer expert with lots of computer degrees and experience, and is just more capable of understanding this stuff than mere mortals like us.
Claim 2: He ran loads of “tests.” No specifics.
Claim 3: MRC will NEVER create more than 3 layers. He even cited 3 papers on this, the most specific claim he made. Unfortunately, the very papers he cited dismissed this notion, and he got drowned in technical data stating that it was complete crap.
Claim 4: “Several” ALWAYS means 3. His desperation defense of his theory, dismissed with the dictionary definition of “several.”
Claim 5: Mac Preview was NOT used to make it, and he KNEW it due to his superdooper knowledge. Also, Mac Preview will never create multiple layers. This led to the MRC discussion.
Claim 6: Why did everyone use an Adobe product when an Adobe product was not used? This was his constant refrain in mocking John. Well, for one thing, it disproved his claim that no program would ever create multiple layers, and other birther experts had looked at the metadata and said it was created with an Adobe product. Which brings us to…
Claim 7: He has looked at the metadata, and it’s TOTALLY OBVIOUS what program was used, and the fact that we could not see it means that we are blinded by our ideology and hatred. Sadly, the other birther experts have looked at the same metadata and drawn completely different conclusions, so his claim is logically inconsistent.
Claim 8: We are all big poopyheads and stupid because we don’t worship him.
Claim 9: Did I mention that Papit is, like, totally knowledgeable beyond our understanding?
Well, I must admit it’s a bit difficult to argue with a superdooper computer expert.
The thing that puzzles me is the significant number of times Mr. Papit has made assertions that are totally, unreservedly crack-pot wrong.
But I guess that just shows I really don’t fully understand superdooper computer experts.
If he is a sooperdooper computer expert and did loads of tests, then he should provide his loads of test scenarios and scripts. Show your work.
I don’t think you have to show that stuff if you’re a superdooper expert. I mean, what makes you think you’re worthy of him having to provide it? Garrett said it. That oughtta be enough for you.
Of course if he ever did release his tests, all of us non-sooperdooper computer experts couldn’t possibly understand such things. I mean we are talking about computers, and they are so much harder to understand than General Relativity and Quantum Mechanics. May God have mercy on our non-sooperdooper computer expert souls.
Right. So for him to release his work is kinda pointless anyway, since only sooperdooper computer experts would even understand it. And if you’re a sooperdooper computer expert like Mr. Papit, well, obviously, you’re gonna agree with him.
So really — never mind him showing his work — we ought to be grateful that he even shared the results with us. 😉
You could just write software that does what the WH PDF does. Some scanner manufacturer might have written it.
Maybe the WH used a foreign scanner and now doesn’t want to admit it, since some U.S. politicians call for using U.S. products rather than foreign ones.
Maybe the WH has since returned the scanner because it produces such big halos.
I’m posting my picture analysis here now: a vBulletin forum with long lines, [code] tables, and pictures.
“You could just write software that does what the WH PDF does. Some scanner manufacturer might have written it.”
Yes, you certainly and absolutely can write software that produces exactly the effects shown in the White House PDF. To do so is simply a matter of writing the software.
The immediate question then becomes: Is it even halfway reasonable to think that somebody did write software that optimizes a file in the way in which the White House PDF is optimized? And the answer to that, as far as I can see, is: Yes. Absolutely.
In fact, it very much appears to me that a file processed into the kinds of “layers” seen in Obama’s PDF is going to take up less storage space than a file processed using a program that slavishly adheres to the basic MRC optimization model presented by Papit.
Well… since the entire purpose of optimization is to reduce file size while maintaining a legible image, then why on earth wouldn’t a software engineer write a program that produces the artifacts seen in Obama’s PDF? Not only is such a scenario plausible — it absolutely appears to me to be downright likely.
Contrast that with Papit’s evaluation of the same scenario. He claims it’s impossible. To my eye, it is not only possible — given the goal of optimization, the goal of software engineers writing such programs, and the fact that such an algorithm saves additional space beyond the standard basic MRC model– it is quite likely.
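For anyone who wants to see what that kind of layering looks like in code, here is a toy sketch (mine, not anything from the White House workflow or any real scanner firmware): split a page into a background plus one small bitmask per foreground color, each cropped to its bounding box. The pixel representation and all names are purely illustrative.

```python
def split_into_color_masks(pixels, background_color):
    """pixels: dict mapping (x, y) -> (r, g, b) color tuples.
    Returns one mask per non-background color, cropped to its bounding box."""
    by_color = {}
    for (x, y), color in pixels.items():
        if color == background_color:
            continue  # background pixels stay in the background layer
        by_color.setdefault(color, set()).add((x, y))

    layers = []
    for color, points in by_color.items():
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        layers.append({
            "color": color,                                # one flat color per mask
            "bbox": (min(xs), min(ys), max(xs), max(ys)),  # tight bounding box
            "points": points,                              # the 1-bit mask itself
        })
    return layers

# Tiny example: black "text" plus a green "stamp" on a white background.
page = {(1, 1): (0, 0, 0), (2, 1): (0, 0, 0),   # two black pixels
        (9, 9): (0, 128, 0)}                    # one green pixel
layers = split_into_color_masks(page, background_color=(255, 255, 255))
```

Each mask needs only 1 bit per pixel inside its own bounding box plus a single color value, which is exactly why several small masks can beat one full-page foreground layer.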
It makes no big difference whether you put all the text in one layer rather than in 6; the colors are all similar (dark). And the white dots are not really needed. Then you have 2 layers instead of the 3 that MRC usually has, which is even one layer fewer than Papit wants. The subsequent optimization would be almost the same; since the 6 layers don’t overlap, they can be merged.
The colors are ‘similar’ but not the same, and merging would increase the size significantly.
Keep up the good work.
You can handle this subsequently. Just record the colors of the subareas in a small list, then put everything in one layer, compress it as black, then reassign the colors from the list. Exactly the same result, and not much space needed.
If you put all the layers into one, then you lose track of where the subareas of different color are. In that scenario, you have to go back to having a full foreground layer.
If you want to eliminate the foreground layer and use a simple small list of colors, then you have to have multiple bitmasks. It’s as simple as that.
Just write it down: layer n starts at coordinates (x0,y0), ends at coordinates (x1,y1), and has color (b,g,r). 11 bytes per layer.
Well, I suppose you could do it that way. It’s probably about 6 of one, half a dozen of the other.
The size of the bitmap is significantly increased with ’empty/zero’ bits. Think about it. Placing all these subsets into one big one comes at a cost.
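gsgs’s bookkeeping suggestion is easy to make concrete. A hedged sketch (the field layout is my guess at what he means: four 16-bit coordinates plus three color bytes):

```python
import struct

def pack_layer_record(x0, y0, x1, y1, b, g, r):
    # Four unsigned 16-bit coordinates + three 8-bit color channels
    # = exactly the "11 bytes per layer" mentioned above.
    return struct.pack("<4H3B", x0, y0, x1, y1, b, g, r)

rec = pack_layer_record(120, 340, 560, 400, 0, 0, 0)  # a black text block
```

At 11 bytes per layer, even dozens of layers cost well under a kilobyte of bookkeeping, so the real per-layer overhead argument is about the bitmask padding, not the headers.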
Even a 10k space saving would make it worth doing.
So what you’re saying (for Garrett and all the other birthers out there) is that if an algorithm makes for a smaller file than the “basic” algorithm (like, say, might be expected out of any piece of production software…), then it is reasonable, or even likely, that the algorithm would be in use somewhere. In other words, whether or not the exact method can be reproduced, we can determine if the method is better or worse than some standard (like MRC). Of course they will just willfully ignore this along with everything else they don’t want to hear (or don’t understand).
The goal of optimization software is to make file sizes smaller without sacrificing too much legibility.
Let’s say you have a basic model for how this is done. The model uses 3 layers: a background layer, a bitmask layer, and a foreground layer.
If a software engineer decides he may be able to save some additional space by using multiple SMALL bitmasks — and notice this: in Obama’s PDF the COMBINED size of ALL of the bitmask layers would appear to be LESS than that of one single full-sheet bitmask layer — and then completely eliminate the foreground layer, substituting several simple color values (in the case of Obama’s PDF, eight of them) … well then, why not?
Such a program might be written even if the programming theoreticians and academic community should consider such a tweak to be bad coding form. Not all code conforms to the standards held up as best practice by the academicians. Heck, I’ve worked on other people’s code that was literally astonishingly… um… how can I say this somewhat charitably? Highly divergent from best practice.
That fact alone may well be enough to invalidate Mr. Papit’s thesis.
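A back-of-envelope check of that size claim, with every number assumed for illustration (the page dimensions, coverage fraction, and 8-mask count are my placeholders, not measurements of the actual PDF):

```python
# Hypothetical ~150 dpi US-letter page, 1 bit per pixel.
PAGE_W, PAGE_H = 1275, 1650
full_mask_bytes = PAGE_W * PAGE_H // 8      # one full-sheet bitmask

# Suppose 8 tight masks together cover ~35% of the page area
# (an assumed figure), plus a small fixed header per mask.
HEADER_BYTES = 11
small_masks_bytes = int(full_mask_bytes * 0.35) + 8 * HEADER_BYTES
```

The full-sheet mask comes out around 260 KB uncompressed against roughly 90 KB for the small masks; as long as the masks don’t tile the whole sheet, the combined scheme wins before compression even starts.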
I don’t feel that this was done to push the compression rate. The algorithm is simple and easy to implement and makes sense, but is apparently unusual. Maybe they didn’t use the usual method because of copyright problems; maybe they didn’t even know about it and developed their own method. (“They” = the people who wrote the software that came with the scanner, as a driver on a CD perhaps, or that was integrated into the scanner.) The real increase in compression would have come if they had predicted the security-paper pattern by its periodicity.
“The real increase in compression would have come if they had predicted the security-paper pattern by its periodicity.”
How, pray tell, are you going to use periodicity to “predict” the security paper?
As someone who writes code that sometimes diverges from best practices and who has a penchant for homages to the Rube Goldberg school of interface design, I agree with you that this is probably sufficient to invalidate Mr. Papit’s work.
I have been reading enough to ask stupid questions about MRC compression. How do we know the “layers” identified by Adobe Illustrator when it deconstructs a PDF file are the same “layers” generated by an MRC algorithm? Perhaps the background layer remained intact while other layers might be split by Illustrator in some way as they are separately compressed.
Well, you’ve completely failed to ask a stupid question here, because that is a pretty good question that I’m betting gsgs can’t answer in any way except “we don’t know”.
You can peek into the PDF, and the layers are there, encoded as 9 separate binary streams, together with 3 other streams. The bitstreams have a header telling us something about them. There are programs to extract the layers from a PDF as image files; I used pdfimages.exe from the xpdf package.
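For anyone who wants to do the same peeking without special tools, here is a rough sketch in Python (mine, not gsgs’s method): scan the raw bytes for stream…endstream pairs and try to inflate each one. A real PDF parser has to honor the /Filter entries and the cross-reference table; this shortcut only recovers Flate streams and ignores anything zlib can’t decode, such as the DCT background image.

```python
import re
import zlib

def extract_flate_streams(pdf_bytes):
    """Return the decompressed contents of every FlateDecode stream
    found by a naive scan of the raw PDF bytes."""
    streams = []
    for m in re.finditer(rb"stream\n(.*?)\nendstream", pdf_bytes, re.S):
        try:
            streams.append(zlib.decompress(m.group(1)))
        except zlib.error:
            pass  # not Flate data (e.g. a JPEG/DCT or JBIG2 stream)
    return streams

# Miniature stand-in for a PDF with one Flate-compressed stream object.
fake_pdf = (b"%PDF-1.4\n1 0 obj\n<< /Filter /FlateDecode >>\n"
            b"stream\n" + zlib.compress(b"hello, text layer") +
            b"\nendstream\nendobj\n")
streams = extract_flate_streams(fake_pdf)
```

Real PDFs may also use \r\n line endings around the stream keywords, so a production version would need a more forgiving pattern.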
Thanks. I was never curious enough to download the Illustrator demo to see what it did exactly. The bottom line seems to be that the layers seen in the LFBC are constructed as one would expect from a compression tool like MRC where text and simple elements are separated from image like elements. I can see now why some early analyses mentioned OCR. MRC compression looks for text to pull out to its own layer where it can be compressed nearly without loss in a bit mask at a high ratio using JBIG2 compression.
The separation was first done by color, then by connected components, as is necessary for OCR. But then the components were compared to each other, and similar ones were replaced by identical ones, so as to improve the compression. Apparently no letters were assigned to the components as in OCR; they were not compared against a database of letters.
However, the flate compression that was then applied could not really make use of the previous replacements and was not very effective. This could be because the identical components were too far apart, and flate only looks 32768 bytes backward.
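The point about the 32768-byte window is easy to demonstrate with zlib itself. In this sketch (mine; the sizes are arbitrary), the same 4 KB chunk appears twice: once about 5 KB apart, once about 44 KB apart. Deflate can back-reference the near copy but not the far one.

```python
import os
import zlib

chunk = os.urandom(4096)                  # incompressible 4 KB chunk

near = chunk + bytes(1000) + chunk        # second copy ~5 KB back: inside the window
far = chunk + os.urandom(40000) + chunk   # second copy ~44 KB back: outside the window

z_near = zlib.compress(near, 9)
z_far = zlib.compress(far, 9)
# The near duplicate costs almost nothing; the far one is stored in full,
# because deflate's dictionary only reaches 32768 bytes backward.
```

This is why moving similar glyph components closer together, or using a compressor with a larger window, would have helped: plain flate simply cannot see the earlier copy.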
With other flate-like compression tools I could compress the main text layer to ~45000 bytes instead of ~60000; ~35000 should be possible with a precompiled list of …
But most bytes (~300000) were required for the compression of the security-paper background, although this was done at only half the resolution. At full resolution it might have been ~1200000 bytes.
With an artificial, periodic security-paper background like this one, you could get it down to ~10000 bytes.
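The “predict the pattern by periodicity” idea can be sanity-checked with a deflate-only experiment (mine; the tile is synthetic, not the actual security-paper pattern). A perfectly periodic ~1.2 MB background collapses to a small fraction of its size, because every repeat becomes a short back-reference:

```python
import zlib

tile = bytes(range(256)) * 4      # a synthetic 1 KB "pattern tile"
page = tile * 1200                # ~1.2 MB of perfectly periodic background

z = zlib.compress(page, 9)
# After the first tile, every repeat is encoded as back-references,
# so the compressed size is a small fraction of the input.
```

A real scan never repeats exactly because of noise, which is why the idea is framed as predicting the pattern and then compressing only the residual; the periodic part itself is nearly free.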
JBIG2 wasn’t used to compress the final PDF with Quartz/Preview. The 8 text layers were compressed with flate (lossless) and the background layer with DCT (lossy JPEG, at ~65% quality I estimate). Maybe it was compressed when Preview opened it; we don’t know.