1

Consider a world W where there are many xs (such that x ∈ X). In this experiment, we define the winner of the game as the x with the maximum M value (think of M as standing for “money”, perhaps), i.e. more formally, x(i) is the winner at some time t if max(M(x(0),t), M(x(1),t), M(x(2),t), ..., M(x(i),t), ...) == M(x(i),t).
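In code, the winner condition is just an argmax over X. A minimal sketch, assuming M is available as a function of (x, t); the names here are illustrative, not part of the experiment’s definition:

def winner(X, M, t):
    # x(i) wins at time t iff M(x(i), t) is the maximum over all x in X
    return max(X, key=lambda x: M(x, t))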
Let’s say:

(1) Each x can utilise some strategy S(x,M) to increase their M value between t and t+1, i.e. M(x,t+1) = S(x,M(x,t)) such that M(x,t+1) > M(x,t).
(2) Since this is an unfair experiment, each x has a different initial M value, i.e. M(x=x(i),t=0) != M(x=x(j),t=0). In some extreme cases there could, of course, be a very large difference, M(x=x(k),t=0) >>> M(x=x(q),t=0).
(3) x=x(i)’s strategy function S(x(i),M(x(i),t)) at some time t may or may not have an adversarial effect on another x=x(j)’s strategy function S(x(j),M(x(j),t)). Therefore x(i) and x(j) must know that their strategies can adversarially affect each other, to one’s benefit and the other’s decline.
The question that this thought experiment proposes, then, is this: is knowledge of the above three statements alone enough to win the game in world W (i.e. to pick the best strategy S in each epoch t, t+1, t+2, ...)?
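To make the setup concrete, here is a minimal simulation sketch of world W under the three statements. Everything concrete in it is an assumption for illustration: the particular strategy (2–5% growth per epoch) and the adversarial coupling (each x skims a fraction of the others’ totals) are hypothetical stand-ins, not part of the experiment’s definition.

import random

def step(M, skim=0.01):
    # one epoch t -> t+1 of world W (hypothetical dynamics)
    # (1) every x applies a strategy S that strictly grows its M (here: 2-5%)
    grown = {x: m * (1 + 0.02 + 0.03 * random.random()) for x, m in M.items()}
    # (3) strategies interact adversarially: each x skims a fraction of the
    #     others' totals, so one x's benefit is another's relative decline
    n = len(grown)
    return {
        x: grown[x] * (1 - skim)
           + sum(grown[y] * skim for y in grown if y != x) / (n - 1)
        for x in grown
    }

# (2) the experiment is unfair: initial M values differ, some by a lot
M = {"x0": 1.0, "x1": 10.0, "x2": 1000.0}
for t in range(50):
    M = step(M)
print(max(M, key=M.get))  # the winner: the x with maximum M after 50 epochs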
2

A standard narrative of the utopians can be somewhat formally deduced as such:

(1) There exists a current state of the world, say S(t), with the life of all people being sub-optimal, say avg(P(t)) <<< THRESH; where P(t) is the wellness of people in S(t) and THRESH is some global threshold of wellness of all the people in the world.

(2) There will be a better, ideal, utopian world S(t+1), where P(t+1) >= THRESH or, ideally, P(t+1) >>> THRESH.
Given the above two statements as the axioms of the utopians, the missing element seems to be an understanding of the nature of the function that transforms S(t) into S(t+1). Clearly, S(t+1) is dependent on S(t), and clearly we must expect multiple iterations t, t+1, t+2, ..., t+n until S(t+n) > S(t), because a single epoch is unlikely to yield a better S:
# S(t+1) = transform_world(current_state=S(t)), but what is this function?
def transform_world(current_state: S) -> S:
    # what happens here?
    # does this have side effects?
    # how does it affect P(t)?
    ...
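One way to read the “multiple iterations” point is as a loop that keeps applying the transformation until the wellness condition holds. A minimal sketch, assuming a hypothetical avg_wellness() helper and a THRESH constant; the utopian narrative asserts these exist but says nothing about their internals, which is exactly the gap:

def reach_utopia(state, transform_world, avg_wellness, THRESH, max_epochs=1000):
    # iterate S(t) -> S(t+1) until avg(P(t+n)) >= THRESH, or give up
    for t in range(max_epochs):
        if avg_wellness(state) >= THRESH:
            return state, t  # an S(t+n) satisfying the utopian condition
        state = transform_world(state)
    raise RuntimeError("no epoch yielded avg(P) >= THRESH")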
Then, ∀ p ∈ P, what will p(t+1) be? The question, of course, is: what is acceptable? Must it be that p(t+1) > p(t) ∀ p ∈ P? Or is it good enough that, for two groups i ⊂ P and j ⊂ P with len(i) >>> len(j), we get j(t+1) >>> i(t+1), i.e. a small group prospers vastly more than a much larger one, while still satisfying the utopian condition P(t+1) = i(t+1) + j(t+1) >>> THRESH >>> P(t)?
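A tiny worked example shows why the two readings differ (all numbers are made up for illustration). Suppose P has 100 people and THRESH = 50, with everyone starting at wellness 10:

# hypothetical numbers: 99 people in group i stay at wellness 10,
# while the 1 person in group j jumps to 10000
i = [10] * 99      # len(i) >>> len(j)
j = [10000]        # j(t+1) >>> i(t+1)
P_next = i + j

THRESH = 50
print(sum(P_next) / len(P_next))  # 109.9 >= THRESH: aggregate condition holds
print(all(p > 10 for p in i))     # False: not a single person in i is better off

The aggregate utopian condition is satisfied while p(t+1) > p(t) fails for 99% of P, which is precisely the ambiguity the question points at.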
If you know of any literature that extensively discusses the algorithm of the transformation function itself, i.e. the actual inner details of transform_world() and the resulting S in each iteration (rather than the usual accounts, which merely state that S(t) is bad but S(t+1) can immediately be better), or if you wish to contribute to developing the above two thought experiments further, write to [email protected].