In the original story, it is stated that “there are no two identical books”. Due to some reasonable limits of computation, this site reduces that to “there are no two identical pages”.

Of course it then follows that in this re-creation there are no two identical books, but it also means that searched content cannot span more than one page. If you search for some text and the page that follows it is gibberish, then in this implementation that text cannot also exist elsewhere followed by a page of continuing prose.

The number of possible unique books is 29^1,312,000: the 29 characters in our alphabet raised to the power of the number of characters per book, 1,312,000. By comparison, there are thought to be around 10^80 atoms in the observable universe. The number of unique books is so incomprehensibly huge, it may as well be infinity.
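To get a feel for that scale without ever computing the full number, logarithms give its decimal digit count directly. A quick sketch in Python (the constants come from the text above; written in base 29 the number has exactly 1,312,000 digits, one per character):

```python
import math

ALPHABET_SIZE = 29          # characters in the alphabet
CHARS_PER_BOOK = 1_312_000  # characters per book

# Decimal digit count of 29**1_312_000, computed via logarithms
# rather than by materialising the enormous number itself.
digits = math.floor(CHARS_PER_BOOK * math.log10(ALPHABET_SIZE)) + 1
print(digits)  # roughly 1.9 million decimal digits
```

For comparison, the 10^80 atoms in the observable universe need only 81 decimal digits to write down.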

The way this implementation works is to generate unique random numbers, and then turn those numbers into the content of each page. To do this on a book-by-book level, we would have to work with numbers 1,312,000 digits long — which, as far as I know, is just not manageable, even using specialised 'big number' data types and libraries.
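One standard way to get "unique random numbers" without storing any state is an invertible affine map, n → (a·n + b) mod M with gcd(a, M) = 1: every index maps to a distinct, scrambled-looking output, so no value is ever produced twice. This is only a sketch of the general technique, using a deliberately tiny modulus and arbitrary constants — the site's actual mixing function is not described in the text:

```python
from math import gcd

M = 29 ** 8          # tiny stand-in modulus; a real page-level version would use 29**3200
A = 1_234_567_891    # hypothetical multiplier; must be coprime with M
B = 987_654_321      # hypothetical additive offset

assert gcd(A, M) == 1  # required for the map to be invertible

def scramble(n: int) -> int:
    """Map an index to a unique scrambled number below M."""
    return (A * n + B) % M

def unscramble(m: int) -> int:
    """Invert scramble() using the modular inverse of A."""
    return ((m - B) * pow(A, -1, M)) % M

assert unscramble(scramble(42)) == 42
assert len({scramble(n) for n in range(1000)}) == 1000  # all outputs distinct
```

Because the map is a bijection on 0..M−1, uniqueness of pages follows for free from uniqueness of indices.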

So instead, we have to work on a page-by-page level. There are 29^3200 possible unique pages, which, while still a very large number, is much more manageable. We can reliably work with numbers 3200 digits long and use them to generate unique, random 3200-character pages.
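Turning such a number into a page (and back) is just base conversion: treat the number as 3200 base-29 digits, each selecting one character. A minimal sketch, assuming a 29-character alphabet of the lowercase letters plus space, comma, and period (the site's actual alphabet and character ordering may differ):

```python
# Hypothetical 29-character alphabet: a-z, space, comma, period.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."
PAGE_LENGTH = 3200

def number_to_page(n: int) -> str:
    """Read n as 3200 base-29 digits; each digit picks one character."""
    chars = []
    for _ in range(PAGE_LENGTH):
        n, digit = divmod(n, 29)
        chars.append(ALPHABET[digit])
    return "".join(chars)

def page_to_number(page: str) -> int:
    """Inverse of number_to_page: recover the page's index."""
    n = 0
    for ch in reversed(page):
        n = n * 29 + ALPHABET.index(ch)
    return n

page = number_to_page(123456789)
assert len(page) == PAGE_LENGTH
assert page_to_number(page) == 123456789
```

Since the conversion is exactly invertible for any index below 29^3200, distinct numbers always yield distinct pages.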

This unfortunately leaves the vast majority of possible books unreachable. They cannot exist in this implementation, and so the library is not complete. If you have some ideas as to how a complete library could be achieved, please reach out! You can get in touch via email. I still strive to come up with a way to build a complete library.