**Summary:** My question, in a nutshell, is how we should intuitively imagine a generic real number (as opposed to a random one), and whether we can construct numbers which empirically behave like generic numbers in the same way that $e$ or $\pi$ behave empirically like random ones. I hope this is not too vague, informal or philosophical for MO. Let me explain what I'm asking in greater detail.

**Background:** First recall the classical "duality" between (Lebesgue) measure and (topological) category: a subset of $\mathbb{R}$ is said to be "negligible" iff it has Lebesgue measure zero, and "meager" iff it is contained in a countable union of nowhere dense closed sets (i.e., closed sets with empty interior). By Lebesgue measure theory, resp. by the Baire category theorem, a negligible, resp. meager, set has empty interior. Both are "small" in a certain sense, but in incompatible ways, since $\mathbb{R}$ is the union of a negligible set and a meager set (a simple example is given below). There is also a classical theorem of Erdős (refining an earlier result of Sierpiński) showing that, under the Continuum Hypothesis, there is an involution of $\mathbb{R}$ which takes negligible subsets to meager ones and conversely.

[What follows is badly written: jump to "edit/clarification" below for an attempt at saying things more clearly.]

There are various meanings of the word "random", but the general flavor is that a real number is random iff it does not belong to any negligible Borel set which can somehow be described or coded in a simple way (e.g., being random over a transitive model of set theory means not belonging to any negligible Borel set coded by a sequence in that model; there are also weaker meanings of "random" where we only forbid belonging to negligible Borel sets with simpler descriptions, e.g., those with a code computable by a Turing machine). For example, a random number will be normal in every base, because the set of real numbers which are not normal in every base is a simply described negligible Borel set.

Now a real number like $e$ or $\pi$ is not random even in the weakest sense, because it is, well, equal to $e$ or $\pi$, and that is not random (it belongs to the Borel set $\{e\}$ or $\{\pi\}$ which is negligible and certainly computable). It does seem to be, however, "empirically random", in a sense that we (or at least, I) don't know how to make precise, but the idea being that it won't belong to any simply defined negligible Borel set which hasn't been specifically constructed to contain it. For example, it is conjectured that $e$ and $\pi$ are normal in every base: we expect their decimals to pass statistical tests of randomness. The same holds for a huge number of "naturally defined" real numbers (and not just real numbers: see this question for another case). Philosophically, it is also generally expected that the real number whose binary expansion is obtained by flipping an unbiased coin (or, better, taking some physical source of randomness) will be random in a strong sense. So, anyway, we have a good intuition of what a random number feels like.
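To make "passing statistical tests of randomness" a little more concrete, here is a minimal sketch (my own illustrative code, not a standard test suite) which computes decimal digits of $e$ by exact integer arithmetic and tallies their frequencies; for an empirically random number we expect each digit to occur with frequency roughly $\frac{1}{10}$.

```python
from collections import Counter
from fractions import Fraction

def e_digits(n):
    """First n decimal digits of e = sum of 1/k!, by exact arithmetic."""
    s, term, k = Fraction(0), Fraction(1), 0
    while term > Fraction(1, 10 ** (n + 5)):
        s += term
        k += 1
        term /= k
    frac = s - 2  # fractional part of e = 2.718...
    return [(frac.numerator * 10 ** (i + 1) // frac.denominator) % 10
            for i in range(n)]

# Tally digit frequencies: an empirically random number should show
# each digit appearing with frequency close to 1/10 (a crude
# finite-sample check of normality in base 10).
counts = Counter(e_digits(500))
```

On the first 500 digits one expects every count to land somewhere near 50, consistent with the conjectured normality of $e$; of course no finite test proves anything either way.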

The dual notion of a "generic" number, however, is more obscure: a real number is generic iff it does not belong to a meager Borel set which can be somehow described or coded in a simple way (e.g., belonging to a transitive model of set theory, or Turing-computable, or something like this).

Here is an example of something we can say about generic numbers: call a real number (between $0$ and $1$, say) an "oft-repeater in base $b$" iff its expansion in base $b$ repeats an infinite number of times *all* the digits up to that point. In other words, there exist arbitrarily large $n$ such that the digits $c_n$ to $c_{2n-1}$ are equal to $c_0$ to $c_{n-1}$. It is easy to see that the set of oft-repeaters in base $b$ (and therefore, in every base) is negligible but comeager (i.e., it contains a countable intersection of dense open subsets): so a random real number is *not* an oft-repeater in any base, but a generic real number *is* an oft-repeater in every base.
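The oft-repeater condition is easy to check on a finite prefix of a digit expansion. Here is a small sketch (the function name is mine) that finds every $n$ at which the prefix repeats itself in the required sense:

```python
def oft_repeat_points(digits):
    """All n >= 1 (with 2n <= len(digits)) such that the digits
    c_n .. c_{2n-1} equal c_0 .. c_{n-1}."""
    return [n for n in range(1, len(digits) // 2 + 1)
            if digits[n:2 * n] == digits[:n]]

# Example: in "101101" the first three digits repeat immediately.
print(oft_repeat_points(list("101101")))  # -> [3]
```

An oft-repeater is a number for which such $n$ keep occurring no matter how far out one looks, so a finite check like this can never confirm the property, only fail to refute it.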

We don't know this either way, but I don't think anyone would seriously conjecture that $e$ or $\pi$ is an oft-repeater in any base: clearly we expect them to be empirically random and *not* empirically generic. Or in other words, we expect measure theory to be a better predictor of what $e$ and $\pi$ behave like than category. An "empirically generic" real number, however, would be (among many other things) an oft-repeater in every base (and it would *not* be normal in any base: for example, in a generic number, there are arbitrarily large $n$ such that all digits $c_n$ to $c_{n^2}$ are zero — the set of normal reals is meager).

**Questions:** So, my questions are something like this:

Are there "naturally defined" real numbers which are "empirically generic" rather than "empirically random"? Or at least, can we give some examples of (non-"naturally defined") such numbers?

Is there some kind of process (physical or idealized), analogous to tossing a coin, that would produce a (somewhat!) generic real number?

Is there a philosophical argument explaining why measure theory predicts better than category how the numbers naturally encountered in mathematics behave? Why should we expect $e$ and $\pi$ to behave more randomly than generically (when, in fact, they are neither)?

How can one intuitively visualize a generic number? (I think I can picture a random one, and the idea of it being normal makes sense, but the fact that a generic number has arbitrarily large $n$ such that all digits $c_n$ to $c_{n^2}$ are zero seems very difficult to imagine.)

How would one even test empirically if a given real number is generic? (Assume you have a true generic oracle and a fake one: how would you proceed to detect which is the true one? Can we have a "genericity test" like we have randomness tests?)

Perhaps the idea that genericity should behave symmetrically to randomness is naïve: please don't hesitate to tell me why this is naïve!

**Edit/Clarification:** The above discussion was probably too messy or informal. Let me try to give a clearer restatement:

**Definition:** If $\mathscr{T}$ is a set of Turing degrees, a real number is said to be $\mathscr{T}$-random, resp. $\mathscr{T}$-generic, iff it does not belong to any negligible, resp. meager, Borel set which can be coded by a sequence whose Turing degree is in $\mathscr{T}$ ("coding" of Borel sets being done, say, as in Jech's *Set Theory*).

If $\mathscr{T}$ is the set of degrees belonging to some transitive model $\mathfrak{M}$ of ZFC, the reals in question are said to be random over $\mathfrak{M}$, resp. Cohen/generic over $\mathfrak{M}$ (cf. Jech, definition 26.3 and lemma 26.4 in the Third Millennium edition). If $\mathscr{T}$ is simply the degree $\mathbf{0}$ of Turing computability, I think we get a definition equivalent to Martin-Löf random numbers, and something analogous for "generic": of course, this is a much weaker property than being random, resp. generic, over a model of ZFC. We could perhaps define even weaker versions of "random", resp. "generic", by replacing $\mathscr{T}$ with a set of finer degrees, maybe primitive recursive degrees (but if the degrees are too fine, then the definition will become too sensitive to how Borel sets are coded, and this is probably not the right way to proceed).

**Question number 1:** While I think I have an intuitive grasp of how a random real number behaves (irrespective of what $\mathscr{T}$ is), e.g., by imagining a coin being tossed an infinite number of times, the corresponding "generic" notion is much more obscure. Is there some way to picture it intuitively?

Now there is the matter of numbers like $e$ and $\pi$. Of course **these numbers are not random (nor are they generic)** in the sense of the above definition, or even of any reasonable weakening I can imagine.

Nevertheless, $e$ and $\pi$ behave *in certain ways* like random real numbers, and I claim that they behave "more like random reals than like generic reals". For example, if we are to make a conjecture as to the lim.sup. and lim.inf. of the sequence $\frac{1}{n}\sum_{k=0}^{n-1} c_k$ where $c_k$ denotes the $k$-th binary digit of $\pi$, and if Pr. Eugsebel predicts "I conjecture that the limit is $\frac{1}{2}$, because the set of real numbers for which this is the case is of full measure", while Pr. Eriab predicts "I conjecture that the lim.inf. is $0$ and the lim.sup. is $1$, because the set of real numbers for which this is the case is comeager", then experimentally, it appears that Pr. Eugsebel is right and Pr. Eriab is wrong: measure theory seems to predict the *empirical* behavior of the decimals of $\pi$ correctly, and category does not; alternatively, $\pi$ behaves *empirically* like a random number (even though it is not at all random!), in this limited respect, and it *does not* behave empirically like a generic number.
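The contrast between the two predictions can be seen on a toy sequence. The sketch below (my own construction, purely illustrative) builds a "generic-flavoured" bit sequence in which each constant block dwarfs everything before it, so the running means $\frac{1}{n}\sum_{k=0}^{n-1} c_k$ keep swinging close to $0$ and close to $1$, as Pr. Eriab's comeager set demands; an empirically random sequence would instead have the means settle near $\frac{1}{2}$.

```python
def running_means(bits):
    """Partial averages (1/n) * sum of the first n bits, for each n."""
    means, total = [], 0
    for i, b in enumerate(bits, 1):
        total += b
        means.append(total / i)
    return means

# Toy "generic-flavoured" sequence: each new constant block is nine
# times as long as everything before it, so the running mean keeps
# swinging toward 0 and toward 1 (mimicking lim.inf. 0 and
# lim.sup. 1, which hold on a comeager set of reals).
bits = [0]
for k in range(1, 6):
    bits += [k % 2] * (9 * len(bits))
m = running_means(bits)
```

After each block of ones the running mean exceeds $0.9$, and after each block of zeros it drops below $0.1$, so even this short prefix already fails Pr. Eugsebel's measure-theoretic prediction while conforming to Pr. Eriab's.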

**Informal definition** (which probably cannot be made rigorous): Say that a real number is "empirically random" when it behaves like a random real number under simple tests of this kind. (Perhaps "pseudorandom" would be a better term.) For example, an "empirically random" real number should, at least, be normal in every base (note that the set of real numbers that are normal in every base is of full measure). Analogously, we want to define a number to be "empirically generic" (or "pseudogeneric") when it behaves like a generic real number. For example, an "empirically generic" real number should at least be an "oft-repeater" in every base (meaning that there exist arbitrarily large $n$ such that the digits $c_n$ to $c_{2n-1}$ are equal to $c_0$ to $c_{n-1}$); also, the lim.sup. and lim.inf. of the sequence $\frac{1}{n}\sum_{k=0}^{n-1} c_k$, where $c_k$ denotes the $k$-th binary digit of the number, should be $1$ and $0$ respectively (note that the set of real numbers satisfying these criteria is comeager).

**Question number 2:** Whereas $e$, $\pi$ and many others can reasonably be conjectured to be "empirically random", is there, dually, any real number that has been explicitly defined in mathematics that one can reasonably expect to be "empirically generic"? Or could one be defined?

(By "explicitly", I mean to forbid something like "take some number outside of the union of all meager Borel sets with a computable code": this would indeed define a generic real number, not just an "empirically generic" one, but this is not explicit by any means.)

The underlying philosophical question is something like this: "How is it that randomness appears to be a much more natural notion than genericity?" But I don't really expect anyone to have an answer to that.
