Transformation functions and the paradox of Utopia


A standard utopian narrative can be stated somewhat formally as follows:

(1) There exists a current state of the world, say, S(t), in which life is sub-optimal for most people: avg(P(t)) <<< THRESH, where P(t) is the set of wellness values of the people in S(t) and THRESH is some global threshold of wellness for all the people in the world.

(2) There will be a better, ideal utopian world S(t+1), in which avg(P(t+1)) >= THRESH or, ideally, avg(P(t+1)) >>> THRESH.
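The two axioms can be encoded as simple predicates over a list of wellness scores. This is only a sketch; THRESH, its value, and the representation of P(t) as a plain list of numbers are all assumptions made for illustration:

```python
from statistics import mean

THRESH = 0.8  # hypothetical global wellness threshold


def is_suboptimal(wellness):
    # Axiom (1): the current world's average wellness falls below THRESH.
    return mean(wellness) < THRESH


def is_utopian(wellness):
    # Axiom (2): the (transformed) world's average wellness meets or
    # exceeds THRESH.
    return mean(wellness) >= THRESH
```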

Given the above two statements as the axioms of the utopians, the missing element seems to be an understanding of the function that transforms S(t) into S(t+1). Clearly, S(t+1) is dependent on S(t). And clearly, multiple iterations t, t+1, t+2, ..., t+n may be needed before S(t+n) > S(t), since a single epoch is unlikely to yield a better S.

S(t+1) = transform_world(current_state = S(t))

def transform_world(current_state: S) -> S:
    # what happens here?
    # does this have side effects?
    # how does it affect P(t)?
    ...
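One way to make the iteration concrete is a toy loop that applies the transformation repeatedly until the average wellness clears the threshold. Everything here is a hypothetical stand-in: THRESH, the list-of-scores representation of P(t), and especially the diminishing-returns dynamics inside transform_world, which substitute for the unknown inner details the post is asking about:

```python
from statistics import mean

THRESH = 0.8  # hypothetical global wellness threshold


def transform_world(wellness):
    # Placeholder dynamics: each epoch nudges every person's wellness
    # upward by 10% of their remaining distance to 1.0. This is an
    # arbitrary assumption standing in for the real, unknown function.
    return [p + 0.1 * (1.0 - p) for p in wellness]


def iterate_until_utopia(wellness, max_epochs=100):
    # Apply transform_world repeatedly until avg(P(t+n)) >= THRESH,
    # matching the claim that a single epoch is unlikely to suffice.
    for epoch in range(max_epochs):
        if mean(wellness) >= THRESH:
            return wellness, epoch
        wellness = transform_world(wellness)
    return wellness, max_epochs
```

Under these particular dynamics, a world starting at avg(P(t)) = 0.3 needs many epochs, not one, to cross the threshold, which illustrates why the shape of the function (and its convergence rate) matters as much as the axioms themselves.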

Then, ∀ p ∈ P, what will p(t+1) be? The question, of course, is: what is acceptable? Must it be that p(t+1) > p(t) ∀ p ∈ P? Or is it good enough that P splits into a large group i and a small group j, with len(i) >>> len(j) and j(t+1) >>> i(t+1), so long as the combined population P(t+1) = i(t+1) + j(t+1) still satisfies the utopian condition avg(P(t+1)) >= THRESH >>> avg(P(t))?
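The two candidate answers can be contrasted numerically. In this sketch the population sizes, scores, and threshold are all invented, and wellness is treated as an unbounded score so that the small group j can dominate the average:

```python
from statistics import mean

THRESH = 0.8  # hypothetical threshold; wellness is an unbounded score here

# Scenario A: every p improves modestly and uniformly.
uniform = [0.85] * 100

# Scenario B: a small group j (10 people) soars while the large group i
# (90 people) stays put; the *average* still clears THRESH even though
# most individuals saw no gain at all.
i_group = [0.5] * 90
j_group = [5.0] * 10
unequal = i_group + j_group
```

Both worlds satisfy avg(P(t+1)) >= THRESH, yet the worst-off person differs sharply between them, which is exactly why the averaged utopian condition underdetermines what "acceptable" means for each p.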

If you know of any literature that extensively discusses the algorithm of the transformation function itself, i.e. the actual inner details of transform_world() and the resulting S at each iteration, rather than the usual accounts which merely state that S(t) is bad but S(t+1) can immediately be better, then please write to [email protected].

If you wish to help develop this thought experiment further, write to [email protected], or discuss this post on /r/philosophy.