
- General Ontology XXIXm2

determination of the diverse concrete cases of a given Layer of Being. This identity (the same set of general categories determining each individual concrete case) is only violated when passing over into another Layer of Being, either downwards or upwards. But then the same applies with respect to the general categories of that new Layer. In fact, things are as follows: That there are Layers of Being at all, each with a typical, coherent ensemble of categories (categorical pattern), and in this way differing from each other, is already a function of the presently discussed Law of Categorical Connectedness or Union (first Law of Categorical Coherence).

The first Law of Coherence is a basic law. But because it deals with the phenomena of categorical coherence still from the standpoint of determination taking place upon a concrete case, thus fully involving principle-concretum determinations, its nature is still close to the Laws of Applicability discussed earlier. The first Law of Coherence therefore represents the transition from the Laws of Applicability to the true qualitative mutual connectedness of the general categories of one and the same Layer of Being. The fourth Law of Applicability says that all general determination of concrete cases comes from the categories of their Layer, while the first Law of Coherence says that all general categories of the given Layer are involved in the determination of each concrete case of that Layer. So these two Laws supplement each other. Taken together, they say that the whole of the general elements contained in the overall pattern of determination of a whole Layer of Being is based on the all-pervading identity (from case to case) and coherence of the corresponding layer of categories.

The Special Categories, on the other hand, are more or less dispersed across the total field of concrete beings of the given Layer. They further determine the already generally determined collective concretum. Generally, the If-component of a given particular special category does not, when satisfied, pervade (by implication) the whole Layer, despite the fact that this Layer is that category's range of relevance. Whether the If-component is satisfied or not, i.e. whether a sufficient ground for the Then-component (which component is a particular being-so-and-so) is present or not (in the sense of actually existing or not), depends on the actual state of the totality of concrete beings and processes representing the concretum of the whole categorical Layer. This state implies certain specific local patterns to be present, some of which might represent a sufficient ground of the Then-component of the category, resulting in the realization of the category's concretum.

Let us now discuss the second Law of Coherence, the Law of Layer-unity: The general categories of a given Layer of Being form a coherent unity not only by their determining in unison, but also in virtue of themselves, i.e. in virtue of their own content, nature, or structure. A single general category can only exist (which here means: can only apply) insofar as the other general categories of the Layer exist (apply). Their connectedness in their acts of determining is based on their own qualitative or structural connectedness. There are no isolated general categories.

The Law of Categorical Connectedness (first Law of Coherence) expresses the coherence as it were from without, namely as complex determination, determination in unison. Such a way of determining (complex determining) must, however, already be based on an all-out relationality within a category Layer. The present Law of Layer-unity and the next two Laws of Coherence deal with this relationality in the framework of the general categories of a given Layer of Being. These three Laws of Coherence can therefore, in contrast to the first Law of Coherence, be denoted as coherence from within, or as Laws of Intercategorical Relation.

So with respect to their content, the general categories of a given Layer of
Being belong together. They do not have independent contents. Here, too, this does not apply to the Special Categories: their content is largely independent of the content of the other special categories. However, it is to be expected that the content of any given special category presupposes the content of all the general categories of the Layer in which that special category occurs.

The third Law of Categorical Coherence is the following, the Law of Layer-wholeness: The unity of a Layer of categories, i.e. of the total set of general categories of a given Layer of Being, is not the sum of its elements, not the sum of those individual categories, but an indivisible wholeness that is prior to its elements. The Layer-wholeness consists of the mutual dependence of its elements. Such a wholeness is not the same as unity; it is unity in a much stronger sense. The general categories of a given Layer are not just dependent on some other categories, they are mutually dependent on each other, which means that they can only originate all together at once.

Although mutual dependence as such, and thus the wholeness of the set of general categories, could in principle also hold beyond the given Layer, it does not do so. That is to say, it does not extend beyond a given Layer of categories determining the corresponding Layer of Being, which limit is expressed in the fourth Law of Coherence, the Law of the Limits of the Range of Mutual Dependence: The mutual dependency of the non-special categories and their structural mutual connectedness do not extend beyond the limits of one and the same Layer of Being. They are limited to the totality of categories of precisely one Layer. This means that although categories of other Layers could be more or less similar to categories of the given Layer, they are never identical. So non-special categories are always general categories, but general only with respect to precisely one and the same Layer.

All this further implies that the Layers of Being are fundamentally different from each other. They each extend precisely as far as the mutual dependence and structural mutual connectedness of their categories extend. If this Law did not hold, then all categories, that is to say all categories of the whole of Reality, would be mutually dependent and structurally connected (i.e. having similar structure). Then also higher categories, for example those specific to the Psychic or still higher Layers, would co-determine lower beings, resulting in, say, a mechanical motion additionally governed by purpose, intention, predestination, etc. And in the same way Inorganic beings would then be considered to be alive.

Finally we have arrived at the fifth Law of Coherence, the Law of Implication: The wholeness of the given Layer of categories reappears in every one of its members. Every individual general category implies the other categories of the same Layer. Every individual general category has its essence outside itself, in the other categories, as well as within itself. So the coherence of the category Layer is present in every one of its members as well as in the whole. The whole set of categories is, as it were, present in every individual member. This Law does, of course, not hold for the Special Categories; the latter exclude each other instead of implying each other.

Implication of categories is the functional intrinsic structure of Categorical Coherence. This implication is not implication as it is used in Logic. Were that the case, the mutual implications of categories would mean their equivalence. Categorical implication of one category by another means, in fact, that the one category necessarily involves the other. In a way such mutual involvement of categories, i.e. the involvation of all general categories of the given Layer by every one general category of that same Layer, is fairly evident in itself. Speaking about general categories, these categories are general with respect to a given Layer, which means that they all enter the set of categories that
determines any given collective concretum of that Layer. Only then is this concretum generally determined, and only then does it truly belong to that given Layer. If, in some one set of categories determining some collective concretum, one such general category were missing, then this category would not be a general category at all. But what we have demonstrated here is the coherence of general categories from the standpoint of their determining a concretum. The Law of Implication, however, means that the categories also qualitatively, i.e. as to their intrinsic content, involve each other, which should be shown for every general category of a given Layer. We will do this for a few instances.

Well then, let us again suppose that Space (spatiality, i.e. something is so-and-so, in this case spatial), Time (temporality, i.e. something is temporal), Process (process-nature, i.e. something is such that it has process-nature), Substance (something is that which remains identical and present during a process of change), and Causality (some pair of entities is the necessary and repeatable succession in time of two states, cause and effect), and some more principles, are general categories of the Inorganic Layer of Being. Then the following implications, in the sense of involvations, are directly evident:

Time involves Process, because time originates from processes.
Process involves Time, because a process is a succession of states in time.
Process involves physical Space, because process, as it is in the Inorganic Layer, involves material bodies, and these in turn involve physical space.
Space involves Time, because physical space involves material bodies (empty space cannot physically exist), while these in turn are products of generative processes, which latter involve time.
Process involves Substance, and vice versa.
Process involves Causality, and vice versa.
Causality involves Time, and vice versa (that is to say, time involves process, and process involves causality), etcetera.

Let this be, for the time being, enough to illustrate what is meant by the qualitative connectedness of all the general categories of a given Layer of Being.

As has been said, the Special Categories do not follow the Laws of Coherence; they generally exclude each other. It is true, however, that the general nature of a given Layer, as it is determined by its corresponding ensemble of General categories, sets limits as to what special categories can occur at all in that given Layer. So we do not expect, for instance, that symmetries as defined by geometric symmetry transformations (rotations, reflections, etc.) occur in the Psychic or Super-psychic Layers of Being. The mentioned geometric aspect of true symmetries (as defined by geometric transformations) assigns such symmetries to either the Mathematical Layer, the Inorganic Layer, or the Organic Layer. So there is some connection between General categories on the one hand and Special categories on the other.

Laws of Categorical Stratigraphy
(HARTMANN, Ibid., p. 472)

According to the third Law of Applicability, every Layer of Being has its own categories. Further, we know that the coherence of general categories has its seat in the horizontal dimension of the system of Layers, that is to say, it only reigns within a given Layer of Being. Seen so far, the Layers would be totally independent of each other, which obviously is not the case, because all the Layers together make up the one World. So there must be certain relations between the Layers, based on certain vertical relations between their corresponding categorical ensembles. These relations cannot consist in coherence again, because then the Layers would vanish altogether, resulting in one Layer only, itself based on one single coherent ensemble of categories. So there must be stratigraphical laws, laws about the relations between Layers, that are independent of implication, which latter is the functional intrinsic structure of categorical coherence. And while indeed the existence of a system of Layers was presupposed in the previous categorical laws, we must
now investigate what precisely this layering of the World is all about, and this boils down to inquiring into the mentioned vertical relations between the categories of different Layers of Being, which relations we have just called stratigraphical laws (still other vertical relations between categories are the Laws of Categorical Dependence, which will be discussed later on). Next we will give the common principle of the Stratigraphical Laws.

General Principle of Categorical Stratigraphy: General categories of lower Layers of Being are often contained within those of the higher Layers, but never the other way around; that is to say, categories of higher Layers are never contained in those of lower Layers. This General Principle indicates that the vertical relations between categories are totally different from the horizontal relations (coherence), because the latter are mutual, i.e. reciprocal, relations, while the former are one-way relations. The General Principle of Categorical Stratigraphy can be resolved into four Laws of Categorical Stratigraphy, which, like the Laws of Coherence, only together form a single uniform but complex regularity. Separated from each other they remain one-sided and could result in confusion, masking their content.

The first Law of Categorical Stratigraphy is the Law of Reappearance: Lower General categories reappear in the higher Layers as kernels of higher categories. They are categories that, once having appeared in some Layer, do not disappear again in the subsequent higher Layer or Layers, but pop up again and again. The whole line of such reappearance presents itself as an uninterrupted passage through the higher Layer(s). This reappearance, however, is a one-way phenomenon: the higher categories do not reappear as elements in the lower categories, and thus not in the lower Layer. The categorical reappearance is irreversible. Moreover, not all general categories reappear in all subsequent higher Layers; some break off somewhere along the line. Let us give some preliminary examples of the reappearance and breaking-off of general categories once they have appeared for the first time somewhere in the Layer sequence, as seen from the Mathematical all the way up to the Super-psychical.

A particular process actually going on is determined by a particular nexus category. Such a category connects different states or stages in a particular and regular way, and is as such a special nexus category. As analysed, it is a dynamical law of a particular dynamical system. This special nexus category determines a particular being-so-and-so, represented by the actual process that is going on; that is to say, this being-so-and-so is a certain sequence of states, and as such represents the category's concretum. Generalizing all this, we can say that a certain, not further specified, collocation of actually existing entities has process-nature. And this means that that collocation is determined by the General Category of Process, which is the general nexus category, where every two consecutive states, taken all by themselves, represent Causality. The concretum of this General category is a regular succession of states, which is a case of something being so-and-so. A sufficient ground (in fact the disjunctive set of all equally sufficient grounds) for this being-so-and-so to be realized is the If-component of the category of Process, while this being-so-and-so, i.e. a regular succession of states, is the Then-component of that category. This category, that is to say the category of Process, is indeed a General category, because in the Layers where it appears it is involved in the determination of every collective concretum of the Layer. Every concrete being or material (as distinguished from just a property), which as such is a collective concretum, i.e. a concrete entity determined by many categories, is involved in one or another process. This category is not only a General category of the Inorganic Layer, but also of the Organic Layer, and probably beyond. So after appearing for the first time in
the Inorganic Layer, it reappears in the Organic Layer. But, and this is important, it does not reappear as something identical; it reappears as a modified category of Process. The living cell provides an instructive example.

The thousands of chemical reactions taking place at any moment within the living cell constitute a complex interacting web. Having studied each individually, the logical approach of the reductionist is to attempt to build them up into sequential chains, recognizing that the products of one enzyme-catalysed reaction will immediately serve as the substrates for another. But all this is too simple to reflect the reality in a living cell. One cannot abstract an individual enzyme reaction from the whole metabolic dance of the molecules, so one cannot abstract any single reaction pathway. What this implies is that many of the substances participating in a certain reaction chain participate not in one but in many interacting pathways, and the factors which may influence the rate of any individual enzyme reaction then multiply dramatically. Once the metabolic web reaches a sufficient degree of complexity, it becomes strong, stable, and capable of resisting change. The stability no longer resides in the individual components (the enzymes, their substrates and products) but in the web itself: the more interconnections, the greater the stability, and the less the dependence on any one individual component. This is molecular democracy. The cellular web has a degree of flexibility which permits it to reorganize itself in response to injury or damage. Self-organization and self-repair are its essential autopoietic properties (autopoietic means self-making). The metabolic organization of a cell is not merely the sum of its parts and cannot be predicted simply by summing every enzyme reaction and substrate concentration that we can measure. For us to understand them as to their significance and function, we have to consider the functioning of the entire ensemble. It is discovered that there are intracellular messages, carried by the ubiquitous signals provided by the calcium ion, and that they are propagated as waves pulsing through the living cells. In the open system of the cell, with a flow of energy passing through it and continual deviations from thermodynamic equilibrium, choreography is all. (See for this ROSE, S., Lifelines, 1997, p. 162-166.)

If we look at one or another biochemical reaction in isolation, we are not looking at the cell (resp. organism) anymore, but have returned to the Inorganic Layer. In the dynamical systems and subsystems, special parameter values (constants for specific dynamical laws) and special initial conditions are set by the organism, resulting in directed pathways. All the above shows how different a biological process is from an inorganic process. Some features, like repair, as we see them in organic processes, can also be recognized in the Inorganic, as for instance in the regeneration of damaged crystals, but these are only analogies, albeit important ones. The organic process, taken generally, is an inorganic process that is over-formed, resulting in a robust, goal-oriented organic process. The overall metabolic process in a living cell is holistic. Of such a holism we only encounter faint images in the Inorganic, as we saw in snow crystals. So although the General Category of Process appearing in the Inorganic reappears in the Organic, it does so in a modified way.

Categories can in this way reappear in the next higher Layer. Some of them keep on doing this, resulting in their presence in all the Layers above the Layer of original appearance. These Layers then come to relate to each other as matter and form: the next higher Layer is an over-forming of the present Layer. Other categories, however, seem to break off, i.e. in the next Layer they fail to appear again, and also never return in the subsequent higher Layers. Where this takes place, the relevant Layers of Being do not relate to each other as matter and form. The one is not an over-forming of the other,
but over-builds the other. In this latter case the Layer that is over-built is not the matter of the aforementioned matter-form relation, but only a material carrier. We can compare this with music recorded on a CD. The CD itself is not in any way part of the music, which is evident from the fact that a gramophone record or a magnetic tape can also carry this same music. So although the music must have a carrier in order to be displayed, the latter can be of any appropriate type or material. Because we are mainly concerned with the transition from the Inorganic to the Organic, and because here only over-forming takes place, we shall not deal extensively with the phenomenon of over-building. The latter is to be expected in the transition from the Organic to the Psychic Layer. To give an example: in the latter Layer the category of Space is missing, i.e. this category, while still present in the Organic, has broken off in the transition to the Psychic. This case of a certain general category not reappearing at all is sufficient to make the Psychic Layer of Being not an over-forming but an over-building of the Organic Layer of Being. Generally we can say:

Reappearance, with concomitant modification, of general categories results in the over-forming of Layers of Being.
Breaking-off of general categories, i.e. their not reappearing again, results in the over-building of Layers of Being.

We can visualize the system of Layers geometrically by means of horizontal dividing lines, i.e. lines dividing the consecutive Layers from each other. The categories going through these Layers can then be symbolized by vertical lines, starting somewhere in the system and going upwards. It is then easy to represent the breaking-off of categories by the termination of lines at one or another Layer-dividing line. With such an image it becomes clear what a decisive role the reappearance of categories plays in the categorical constitution of the World. Nothing less than the unity and intrinsic connectedness of the World, as collective concretum of all these categories, amidst the diversity and seemingly dissolving heterogeneity of Being, is dependent on this reappearance of categories. The main force in this respect is exerted by the Fundamental Categories. They pass through every Layer from the very first onwards, albeit necessarily accompanied by the respective modifications. But the World's connectedness is not limited to these Fundamental categories. Also the General categories undergoing interrupted reappearance, i.e. breaking off somewhere and not returning anymore, play an important part in this connectedness. Although they only pass through a part of the system of Layers, they at least connect some adjacent Layers with each other. By the totality of such interrupted general categories the World is held together also without the Fundamental Categories, because the clamps alternate, that is to say, where one terminates, another begins. So the World would, as has been said, just as well be connected into a unity without the Fundamental Categories, but with them the form of the connectedness is at the same time also uniform. So we cannot doubt that the Law of Reappearance is, in yet a different way than the Laws of Coherence, a fundamental principle with respect to the constitution (basic structure) of the World. Without this Law there would be no observable connection between physical matter and life, life and consciousness, psychic and spiritual creation. A layered or stratified structure can, it is true, also exist without any reappearance of categories, but true interstratigraphical connectedness, i.e. connections between Layers, cannot do without it. The World would be just a jumble of heterogeneous domains of Being, having nothing to do with each other, were reappearance of categories entirely absent. The World in which we live has unity. But this unity is neither that of a principle (say, a master category), nor that of a center, but the unity of an o r d e r and of a coherence, the latter taken in a
general sense. The form of the World's order is its layering, while the form of its coherence (generally meant) is its categorical reappearance: of all Fundamental categories in all Layers, and of General categories in some Layers.

The stratified structure of the World is a one-way increase of categorical structure, from the elementary and simple to the differentiated and complex. Were the complex already contained in some way in the elementary, the latter would not be elementary at all, and the layering would not have the character of a structure that step-wise increases its content, because then everything would enjoy the same structural height, or possess an identical general categorical make-up, as we see it in fact only in one and the same given Layer. A step-wise increasing structure can only come about when the higher Layers possess additional categorical determination as compared with lower Layers. They could not have this extra determination were the domain of their general categories extending into that of lower Layers.

Although we concentrate, on this website, on the transition from Inorganic to Organic, expressed by the crystal analogy, where all general categories of the former reappear in the latter, but in a modified condition, resulting in the over-forming (in contrast to over-building) of the Inorganic Layer, it is nevertheless instructive to again briefly consider the over-building of Layers of Being, as we apparently see it at the boundary between the Organic and the Psychic (subjective spirit), and between the latter and the Super-psychic (objective spirit, consisting of human institutions and history). It is instructive, namely, for learning the precise difference between collective concreta (concrete individual beings or materials) on the one hand, and the several Layers of Being with their respective categories on the other. HARTMANN, Der Aufbau der realen Welt, 1940, discusses this on p. 494-498.

The kernel of this difference between concrete entities on the one hand and Layers of Being on the other is that the concrete entities are themselves layered. And this means that in every higher concrete entity all lower general categories, i.e. all general categories determining the lower concrete entities, reappear, while in the higher L a y e r s of Being (the Psychic and Super-psychic) only a part of the lower categories reappears. So although not all, but only relatively few, lower general categories are contained, in modified condition, within the categorical ensemble of the higher Layers, the ontological sub- and infrastructure of these Layers contains all general categories from the bottom up, where all lower general categories are concentrated in the substructure of these higher Layers, that is to say in their ontological base. And the higher Layers cannot exist without this ontological base (HARTMANN, Ibid., p. 498).

The study of categories should concentrate on the Layers of Being as such, and not on the sequence of concrete entities (inorganic thing or material, organic individual, human being, society). The step-wise increase of the latter's complexity, expressed as an internally layered structure, should be seen as a consequence or expression of the overall Layering of the World as a whole. And it is this stratigraphic aspect of the World whose regularities are expressed by the Laws of Categorical Stratigraphy.

This discussion is in fact about the relation of over-building of one Layer by another. Whereas reappearance of categories (but always with concomitant modification of them) results in over-forming of one Layer by another, the breaking-off of categories results in over-building of one Layer by another. The question in the present discussion is whether such over-building really exists at all in the World. This over-building allegedly takes place first of all at the transition from the Organic to the Psychic. Here the category of Space is said (HARTMANN) to break off, implying the absence of physical and organic processes, of materiality, etc., in the Psychic Layer. But
isn't it more evident that all the General categories of the Organic, and therefore also all those of the Inorganic, cross the borderline between the Organic and the Psychic, because it is certain that any psyche, i.e. any consciousness, presupposes a material organic substrate? If we were asked what concrete entity (collective concretum) represents the Inorganic, we would come up with crystals or materials. If we were asked the same question with respect to the Organic, we would of course come up with individual organisms of the plant or animal type. And with respect to the Psychic, which can be called subjective spirit, we would point to higher mammals like chimpanzees. Finally, with respect to the Super-psychic, which can be called objective spirit, we would point to humans, especially to their latest descendants. The following diagram depicts all this.

Figure above: The stratigraphical range of concrete entities, each one of them representing (which does not mean identity) a Layer of Being. A human being represents the Super-psychic Layer of Being (category Layer). A chimpanzee and like higher mammals (but excluding man) represent the Psychic Layer of Being (category Layer). A herring, any other lower animal, but also all plants, represent the Organic Layer of Being (category Layer). A salt crystal and any other crystal represent the Inorganic Layer of Being (category Layer).

This scheme as such is certainly correct, and reflects the fact that Life presupposes the Inorganic, Consciousness presupposes the Organic, and Social Institutions presuppose the Psychic. Nobody will deny this. Therefore it is to be expected that all the general lower categories reappear in all the higher Layers. HARTMANN, however, maintains that the transition from Organic to Psychic, and also from Psychic to Super-psychic, is fundamentally different from the transition between the Inorganic and the Organic, by the alleged fact that some general categories break off during the former transition. He bases this on the nature of the psychic acts, which are devoid of spatiality and of all that the latter entails. I doubt whether such isolation, when we talk about psychic acts, is admissible. Maybe this is a case of misplaced concreteness. Maybe we just still cannot adequately characterize a psychic act. The connection of psychic acts or psychic processes with organic and inorganic basic features appears to be inextricable, and so the organic and inorganic general categories are expected to enter necessarily into the ensemble of general categories of the Psychic; that is to say, it is expected that all general inorganic and organic categories reappear in the Psychic, and also, together with the psychic general categories, reappear in the Super-psychic. Also our above diagram points in this direction. If this is true, we no longer have to do with relations of over-building, but only with relations of over-forming, implying that all general categories, once having appeared somewhere in the Layer system, keep on reappearing in all the higher Layers (accompanied, however, by modifications) and never break off. We will not bring this discussion to a final conclusion, but leave it open to further inquiry. Anyway, it is important to distinguish between Layers of Being and their respective category ensembles on the one hand, and the concrete entities (crystals, materials, organisms, etc.) representing in some way these Layers on the other. These concrete entities come into the picture again when speaking, further below, about the Laws of Categorical Dependence.

The Law of Modification: The categorical elements are modified in diverse ways when they reappear in higher Layers. The special status that they obtain when taken up into the coherence of such a higher Layer results in their being modified from Layer to Layer. For such a category this modification is accidental, but for the constitution of the World it is as essential as the reappearance as such. As far as the transition from the Inorganic to the Organic is concerned, we have, with
this Law of Modification arrived at the kernel of the crystal analogy. We can see the modification taking place in many special categories when crossing the Inorganic-Organic boundary, such as the condition of Growth, the condition of Regeneration, etc., when we investigate them as they are in the Inorganic Layer and in the Organic Layer. The Law of Modification in fact already follows when one substitutes the Law of Implication (fifth Law of Categorical Coherence) into the phenomenon of reappearance. The reappearing category, which we also called a categorical element, enters into the framework of the higher-Layer wholeness. But in doing so it comes to be subject to that higher Layer's coherence. And because this coherence consists in mutual implication, in the sense of involvation, the incoming lower category must somehow become connected with the categorical elements of that higher Layer. After all, this Law of Implication said that the whole categorical framework of a given Layer reappears in every individual categorical element of that Layer. Therefore a category that penetrates into higher Layers must undergo some modification with respect to its content. We then see some lower category reappearing within a corresponding higher category, i.e. a corresponding category of the higher Layer. For instance, lower categories of Unity reappear only in higher categories of Unity, lower categories of Continuity only in those of higher Continuity, lower dynamical-system types, together with their elements, that is to say together with their basic structure, only in higher types of dynamical system, etc. But precisely those higher types are different structures, and this difference feeds back to the reappearing categorical element. In virtue of the categorical coherence of the given higher Layer in which some categorical element reappears, the reappearance indirectly feeds back to the rest of the categories of the Layer. But this coherence of the higher Layer is at the same time
different from that from which the categorical element came, and accordingly feeds back onto that element. In the case of the Fundamental Categories, new coherence and modification coincide, because they are the general basic elements of the categorical structure of the World at large. The total picture of reappearance and modification shows itself when one inspects them with respect to a given group of categories as they extend through the system of Layers: a bundle of divergent lines intersecting the Layers. Thereby the uniformity of every such line is the expression of the reappearance itself; the progressive divergence, on the other hand, expresses the modification. Qualitatively this divergence consists in increasing differentiation: from Layer to Layer new structure under new coherence appears. Within this progression the original nature of the reappearing categorical element is more and more masked by the superimposition of higher structures. It can eventually become so poorly recognizable that special analysis is needed to reveal or retrieve it. In this way reappearance and modification together result in a categorical connection that not only intersects the coherence of the Layers but is also co-determined by it. And in turn this categorical connection, having resulted from reappearance and modification, co-determines, equally essentially, the categorical content and coherence of the Layers. In fact, despite all heterogeneity, both types of connection, the horizontal and the vertical, integrate into one categorical system (HARTMANN, Der Aufbau der realen Welt, 1940, pp. 499-500). Reappearance together with modification is also to be seen in special categories. We saw it above with respect to Growth and Regeneration, when we compared them as they are in the Inorganic on the one hand and in the Organic on the other. But there are also special categories, like those referring to Shape, Symmetry, and Promorph, which are, where they reappear at all, not modified. First of all,
they do not do so as whatness categories; secondly, they also do not do so as entitative constants. In the latter case the If-component contains in a

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m2.html (2016-02-01)

General Ontology XXIXm3

a categorical NOVUM, as proposed by HARTMANN in his Der Aufbau der realen Welt (1940), is based on the large qualitative gaps as they are allegedly visible in the general series of phenomena: Inorganic Nature, Organism, Consciousness, Objective Spirit. It is then assumed that these gaps are the result of corresponding gaps in the categorical structure that underlies these phenomena; in other words, the gaps as they are visible in the observable world reflect some corresponding discontinuity in the categorical make-up of the World, i.e. in the domain of the World's principles. Indeed, if we interpret a category as an If-Then constant, then the structure of what is determined is in some way already present in the category. The alternative to such a Layer theory is a theory that does not assume the mentioned successive appearance of a NOVUM. But then it must somehow place all potentialities of development of the World's stuff (potentialities to develop life, consciousness, objective spirit, and what not) right at the beginning of the history of Being. And this means that, with respect to initial nothingness, it must assume the appearance of one great NOVUM, a hyper-NOVUM we might say. And after that, nothing truly new will ever appear again, and indeed cannot appear again; that is to say, all potentially new types of being are already implicitly present in the beginning. What we see as development is nothing but the unfolding or explication of what was already present in an enfolded manner. This scenario is the bottom line of the so-called Theory of Everything, where researchers look for one single master equation or master nexus according to which the World unfolds, i.e. according to which things successively turn from an implicated state into an explicated state. Because of this one single master equation, the whole of temporal reality then consists of one category Layer only, which means it is not layered at all. The fact that this theory places everything already in the beginning,
albeit in an implicit way, implying no true creativity, only successive unfolding of the world process, and no openness towards the future, is a problematic point for it, but still doesn't prove it wrong. Further, I doubt whether such a theory will ever, even only in principle, be able to account for the enormous diversity and complexity of existing concrete beings and for the apparent huge gaps between the mentioned domains of phenomena. And, connected with this, whether it can account for all the subtleties that we encounter in plants and animals and in conscious beings, in the latter their distinction between an inner and an outer world. But, as we will see, the theory of ontological Layers and successively appearing NOVA, although enjoying the fortunate position of having to presuppose only a more or less humble beginning of the history of Being, i.e. only inorganic matter and inorganic laws, and of being open to virtually any novel developments (not in its literal sense), also has its problematic points. If we analyse what the appearance of a categorical NOVUM actually means, we will see that it necessarily entails the appearance of local indeterminacy, which is, by the way, totally overlooked by HARTMANN. When accounting for final causality, as it is present in the objective spirit, that is to say in the goal-oriented conscious actions of human beings, he avoids assuming any indeterminacy. Instead, he assumes different types of determination, in this case nexus categories, each one of them characteristic of a particular category Layer. But these different nexus categories are assumed to be the result of the successive and step-wise over-forming of causality (also where higher Layers over-build lower Layers), and these over-formings are themselves the result of, or identical to, successively appearing NOVA. And it is these NOVA that, as any NOVUM does, necessarily entail local indeterminacy. Although the necessary assumption of local indeterminacy within the determinative web of the World looks problematic,
it is not totally alien: in Quantum Mechanics it is encountered. And although the world of Quantum Mechanics is, according to me, about deficient beings, the latter can sometimes be connected to macroscopic beings (efficient beings), as the thought experiment labelled Schrödinger's Cat demonstrates. In the next Section we will discuss the above problems concerning the NOVUM more deeply. The NOVUM will not be discussed with respect to its alleged content, as we have done earlier with the NOVUM as it is supposed to appear in the Organic Layer, but with respect to its general ontological status.

NOVUM, or Unfolding of the preformed. Ontological Status of the categorical NOVUM.

On our Website, especially its Fourth, i.e. present, Part (see discussions in documents I and II of the present Series), we have given much thought and attention to HARTMANN's theory of the NOVUM, i.e. the truly new aspect (thus not merely the phenomenologically new but the categorically new, i.e. the new with respect to the very principles of observed phenomena) that appears at a junction between Layers of Being or, equivalently, category Layers. Crossing from the Inorganic to the Organic, HARTMANN postulates the appearance of an organic NOVUM, the Nexus Organicus, that is responsible for the specific and goal-oriented organization of chemical processes. Then, at the transition from the Organic to the Psychic, a psychic NOVUM is postulated that is somehow responsible for the remarkable interior world that is actually experienced when the higher organism explicitly distinguishes its Self from its not-Self, an ability we call consciousness. Finally, at the next Layer boundary, there appears again a NOVUM, a super-psychic NOVUM, that is responsible for spiritual life. Such a theory has always troubled me, because I keep asking where such a NOVUM comes from, which is as such of course a silly question, but it serves to find out what a NOVUM is and whether such a thing is possible at all. If it comes from the next lower level, then it is not a
NOVUM; so it must, if it is really a NOVUM, come out of the blue. And one has Darwin's theory of evolution, which proposes a more or less gradual development of lower organisms into higher ones. Extrapolated downwards, this leads to a theory of the origin of life from the Inorganic. It all seems so plausible. Even HARTMANN accepts the theory of evolution (in the sense of organic transmutation, and not in the sense of unrolling, unfolding, or explicating): that life has originated from inorganic materials and has developed further into higher organic forms, all the way up to conscious beings, and even further into social and moral conscious beings. After all, in the beginning the Earth was not at all suited to harbor life, so life must have started from inorganic conditions. And, admittedly, we must understand evolution and development not as unrolling or unwrapping, but as origination, generation, and transmutation. But even then it is unclear what these terms should mean in the context of inorganic and organic history. Let us see what follows when we do not accept the periodic appearance of a true NOVUM to account for life, consciousness, and objective spirit. Obviously we will then follow the modern dynamical-systems approach as it is used in research on artificial life or on chemical or computer models of the origin and further development of life. Here it is supposed that complex dynamical systems exist that generate novelty from more or less simple conditions or beginnings. In the First Part of the Website much attention is devoted to such systems: cellular automata, boolean networks, L-systems, dissipative systems, etc. Here we have the transition from simple and less ordered initial conditions to complex and ordered patterns. At first sight this is a promising avenue to the understanding of the origin and development of life. But is it really going to account for all this? A dynamical system in general, and thus also a special system that generates complex patterns (morphological or behavioral), is governed by its
dynamical law. Such a law can be seen as something mathematical. When it is physically interpreted, we obtain one of the many possible initial states or conditions of the system. From this initial condition the system will deterministically go its way, following a particular trajectory, and finally reach some end state, which could be either a steady state or some cycle, or it could end up on a so-called strange attractor, along which it runs indefinitely without ever revisiting a state in which it has been earlier. Especially the steady state and the cycle state, i.e. a periodic cycle, could turn out to be a well-organized complex pattern. This complex pattern can then, in theory, represent a higher form with respect to the pattern representing the initial condition. In this way we have indeed avoided the assumption of the appearance, out of the blue, of a NOVUM. The higher form generated by the dynamical system IS our NOVUM, but fortunately not one coming out of the blue. But of course such a novum is not a genuine NOVUM, because it was already implicitly present in the dynamical law and initial conditions of the dynamical system. And because the dynamical law of such a particular system is just a special case of the set of general natural (physical) laws as they really are (not as we think they are), the essence of life and consciousness must already be implicit in those general physical laws. And when conditions are right, these patterns, that is to say life and consciousness, will necessarily appear. This is the bottom line of thought of those evolutionary biologists and philosophers who do not accept any genuine NOVUM appearing out of the blue during the Earth's history. In most general terms we have to do with the so-called Theory of Everything, which assumes that the history of all being can be described by one single master equation which, when physically interpreted in such a way that we have the appropriate initial condition, unfolds itself and as such is then the history of Being.
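The deterministic course just described (dynamical law, initial condition, trajectory, end state on a steady state or a cycle) can be made concrete with a minimal sketch. The logistic map and the parameter values used below are my own illustrative choices, not examples taken from the text:

```python
# Toy sketch: a dynamical "law" (here the logistic map, an illustrative
# choice) plus one initial condition deterministically fixes the whole
# trajectory and the end state the system settles into.

def logistic_step(x, r):
    """One application of the dynamical law x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def trajectory_tail(x0, r, n=1000, keep=8):
    """Iterate the law n times from initial condition x0 and return the
    last `keep` states (rounded), i.e. the settled pattern."""
    x = x0
    tail = []
    for i in range(n):
        x = logistic_step(x, r)
        if i >= n - keep:
            tail.append(round(x, 6))
    return tail

# r = 2.5: the trajectory ends on a steady state (x = 0.6).
steady = trajectory_tail(0.1, 2.5)
# r = 3.2: the very same law, differently parameterized, ends on a 2-cycle.
cycle = trajectory_tail(0.1, 3.2)
```

Nothing in either end state is genuinely new: each settled pattern is just unfolded content of the law plus the initial condition, which is exactly the point at issue in the objection above.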
Indeed, the assumption of the appearance of a genuine, say organic, NOVUM introduces a mysterious element into biology, and it is understandable that one tries to avoid such an assumption. But is the theory that avoids this NOVUM, i.e. the theory described above (the theory along the lines of the dynamical-systems approach, or, more generally, the Theory of Everything), less mysterious? At first sight it indeed looks far less mysterious than assuming the periodic appearance of a genuine NOVUM. But is it really so? The dynamical-systems approach to the origin and further development (transmutation) of life and consciousness must assume such dynamical laws and initial conditions as deterministically lead to high-level complex patterns. Simple dynamical systems transmute into more and more complex dynamical systems that generate very complex and organized patterns, which are eventually organic patterns and patterns of consciousness. But then these organic patterns and patterns of consciousness (conscious patterns) must already be implicit even in those simple initial systems, if we do not accept the periodic appearance of a genuine NOVUM. I think that this enormous potential, assumed to be present from the very beginning, is at least as mysterious as the periodic appearance of a NOVUM. It is, as I now think, even more mysterious. This is so because, when we admit the successive appearance of a genuine NOVUM at several advanced points along the trajectory of the Earth's history, the NOVA appear, or can apply, when some appropriate substrate has finally been formed by unfolding on the basis of relatively simple initial conditions that all by themselves cannot lead to life and consciousness. The appearance of a NOVUM is then more plausible, certainly with respect to the first and last NOVUM that must be assumed, at the very beginning of the history of Being, by a theory of everything. It only appears when a lot of preparatory work has already been done, where the term "preparatory" can only be used
after the fact, that is to say, no teleology is assumed here. And indeed, if we accept the successive appearance of a NOVUM along the trajectory of the Earth's history, the basic natural laws can be assumed to be much simpler than they would have to be if all the potentials to develop life and consciousness were already implicit in those laws. We then would not, in our explanation, have to start with basic natural laws that are unimaginably complex, at least with respect to their unfolding. No, they are relatively simple, certainly with respect to what they can unfold, and in being so they create relatively simple patterns. It has been found that simple systems can generate highly complex patterns. But are they then simple? Further, the absolute degree of complexity is in fact low: first, in the sense that such systems have until now, in experiments, never reached the status of even, say, the complexity of a humble bacterium; and secondly, in the sense that the complexity must in fact be measured not in the system's result but in its algorithm or dynamical law. In this sense a simple system can never generate a complex result: the result is as simple as the system's dynamical law plus initial conditions. The difference between the two is that the result is unfolded content of the law plus initial conditions. And again, in the context of the successive appearance of a NOVUM, the initial dynamical system can moreover, i.e. for an additional reason, be itself relatively simple right from the start; it doesn't need to be initially complex, namely when we assume, with HARTMANN, that a lower Layer is indifferent as to the content and existence of any higher Layer. Surely a Layer of Being is a category Layer, and that means that it is a coherent set of categories, which in turn means that such a Layer cannot really exist. However, what we in the present discussion actually mean by the existence of a Layer is the existence of all relevant sufficient grounds, which are the If-components of the If-Then constants, implying that
the categories now apply. So no preformation of the living and of the conscious has to be assumed when accepting HARTMANN's theory of NOVA. An initial, relatively simple system runs its course, and from time to time a NOVUM appears. If the substrate already developed so far turns out to be appropriate, the NOVUM materializes and a new Layer is formed. Let us investigate what all this means and does not mean. The NOVUM need not, in this theory, be supposed to exist beforehand. It must, however, pre-exist as an If-Then constant, where the NOVUM is the Then-component. If circumstances happen to be right, the If-Then constant applies and the NOVUM appears. In this framework the categorical NOVUM, i.e. the NOVUM as a category, is the If-Then constant, while the Then-component all by itself is its concretum. But this assessment of the categorical NOVUM as an If-Then constant is not entirely consistent. We have said that an initial general dynamical law which contains the whole potential to develop life and consciousness, and in which there is no increase in complexity, reckoned from the physically interpreted dynamical law plus initial condition of the system to the final pattern that is generated, is not so likely. Such a dynamical law, as any dynamical law, is a category, and more specifically a nexus category. But if we, in accepting the appearance of a categorical NOVUM, see this NOVUM as an If-Then constant, then we have in our theory returned to a category (which is, in the present context, certainly a dynamical law) that is in some way present and creates phenomenal complexity all the way up to life and consciousness, where these phenomena just unfold from the law and consequently are not something really new. But we had just said that this was not so likely. So we are forced to interpret the NOVUM categorically in a different way, a way which clearly shows its newness (novelty) and does not disguise it in the form of a potential as part of a dynamical law. The categorical NOVUM should, to comply with all this, have
the following general structure: If X, then non-N, where X is a disjunctive set of conditions and N is the genuine novelty. It is further assumed that there can be no sufficient ground for N to appear. In itself N could be the concretum of the category If X, then N. But this category does not hold, because X will result in non-N. The specific nature of the If X, then non-N category must be such that it, all by itself, i.e. without a cause, sometimes happens to become If X, then N; and then, indeed, when X is the case, genuine novelty will follow. Here we see in a dramatic way what it entails when we assume the appearance of a genuine NOVUM somewhere along the trajectory of the history of Being. The determinative fabric of the World is i n t e r r u p t e d here and there and from time to time; that is to say, we are forced to admit that there is some play for freedom, i.e. that there is local indeterminism, at least at the categorical level. And this is unavoidable when we accept the NOVUM. HARTMANN (Der Aufbau der realen Welt, 1940, p. 557) says that every Layer of Being has its own type of concretum-concretum determination, its own type of nexus. Such a concretum-concretum determination is itself the concretum of a nexus category; or, said differently, the general and specific nature of some concretum-concretum determination, such as causality (general) or the law of planetary motion (special), stands under, i.e. is determined by, a general or special nexus category. In the Organic Layer it is the Nexus Organicus, taken generally, that determines, as a category, the organic concretum-concretum determination, and which is an over-forming of causality, where every special Nexus Organicus, specific for some particular organic species, is an over-forming of some special inorganic dynamical law. But this over-forming takes place through the action of the organic NOVUM, which here is the nexus organicus itself, while many other organic features are over-formings for which the nexus organicus is responsible. In the Super-psychic
Layer the specific (but general for the Layer) type of concretum-concretum determination is itself determined by the nexus finalis (final nexus, final causality), which is the result of over-forming of the psychic nexus, itself an over-forming of the nexus organicus, in the context of an over-building of the Psychic Layer by the Super-psychic Layer, where the Psychic Layer is in turn an over-building of the Organic Layer. Recall that also where Layers over-build, some categories nevertheless reappear, resulting in over-formed categories. Also in this case we have to do with the appearance of a NOVUM, and thus with a local indeterminacy. This latter is not seen by HARTMANN. A local indeterminacy, as found in the discussed cases, means a partial indeterminism of the World as a whole. Where and when local indeterminacy is the case, it is total indeterminism: it is an interruption of the World's determinacy, which latter consists of several types of determination. With all this we are, however, not yet done with the analysis of a categorical NOVUM. It is necessary to delve deeper into its general nature in order to fully understand what exactly is meant when we claim that somewhere up along the line of history, i.e. the history of Being, something NEW appears. Let us for the time being limit our discussion to the phenomenon of Life, meaning that N, as we saw it in the above formulas, is the concretum of the nexus organicus, which itself is the organic nexus category or If-Then constant. If we assume the constant If X, then non-N changing into If X, then N somewhere along the mentioned line of history, then from that moment on organic novelty will appear wherever the condition X happens to be met. But from then onwards we have a world possessing the complete potential to give rise to life. The new constant If X, then N applies everywhere, i.e. its range of relevance is the whole World, and wherever X is satisfied, N will follow. But then, from the viewpoint of theory, applying Occam's razor, we can
happily ignore the initial part of the described history of Being, that part, namely, where we did not yet have the If X, then N constant. And then we are wholly back to the assumption of the Theory of Everything, or its special biological version, in which it was claimed that all potentials are already present right from the beginning. It is clear that this is not the way to characterize the organic NOVUM, but rather to deny it. To remedy this we could assume that the category randomly oscillates between If X, then non-N and If X, then N. But this is not consistent with the actual long-term presence of stable life on Earth. In order to reach a solution to this dilemma, some facts are very important. We know that, apart from the events that took place at the origin of life, life is only generated from life. If life is lost by the death of some individuals, it is not replenished from the inorganic world but from the organic. Once got off the ground, life has to sustain itself, and indeed it does. And it seems that when it is totally wiped out, it will not automatically appear once again. In the light of all this we are beginning to understand that a categorical NOVUM as such, i.e. as something really NEW, implies by its very nature some aberrant properties. It implies them, however, precisely because it is new. These aberrant properties are not as such assumed by some ad hoc theory, but unfold the precise meaning of "new". The categorical organic NOVUM can now be described in terms of If-Then constants as follows. Generally, the constant If X, then non-N applies. Further, we assume that there cannot exist conditions that necessarily lead to N, where N is the concrete organic nexus, and where the nexus category If X, then N would be the Nexus Organicus, if there were such a category. The presence of inorganic matter and energy is a general conditio sine qua non for life to appear at all, meaning that without inorganic matter there can be no life, but at the same time meaning that it is not a sufficient condition. Indeed, we
know that the Inorganic will not, wholly from itself, give rise to life: it is indifferent to it. The condition X in the above constant is the special conditio sine qua non for life to appear, namely the maximally attainable conditio sine qua non. It is that specific state of matter (in its physical meaning) that is absolutely necessary for life to appear. But also this special conditio sine qua non is still not a sufficient condition for life to appear. Thus, when X is satisfied, N will not appear, because If X, then non-N applies. But we now assume that this particular category can fluctuate: it can, for no reason at all, suddenly switch to its opposite, If X, then N. If this occurs, then everywhere in the Universe where X happens to be satisfied (representing the existence of physical matter in a particular state of complexity), life will emerge in its primordial stages. And from then on the constant If X, then N will continue to apply, but only within the organic machinery. Outside the latter it will not apply, i.e. there the original constant If X, then non-N will apply. Said differently: after biological life has originated as a result of (1) the fortuitous transformation of the constant If X, then non-N into the constant If X, then N, and (2) the condition X being locally satisfied, the new constant If X, then N continues to apply, but its range of relevance is limited to the actually existing organic domain (organic machinery) only. The original constant If X, then non-N also applies, but only outside the actually existing organic domain. Thereby it is possible that this category fluctuates again. And then again, wherever in the inorganic the condition X happens to be satisfied (in the organisms it already is), life will again originate from an inorganic substrate, and now also in these new organisms the constant If X, then N keeps applying, while it does not apply in the inorganic domain. We have said that when the category If X, then N, now taken generally, flashes up from the original category, the concrete NOVUM
N will appear wherever the condition X is satisfied. For the organic NOVUM, X is a certain complex state of inorganic matter and energy, i.e. a certain chemical configuration. For the psychic NOVUM, X is a certain complex organic state of matter, i.e. chemically complex matter forming patterns whose succession from one pattern to another is determined by a particular, very complex and appropriate version, that is to say a highly evolved version, of the Nexus Organicus. For the super-psychic NOVUM, X is an association of a great many related organic individuals, each possessing a highly evolved form of consciousness geared to function within the association. So in this latter case organic individuals with highly evolved consciousness are already presupposed in the categories If X, then non-N and If X, then N, where N is the concrete super-psychic NOVUM. And of course organic conscious beings themselves in turn presuppose the psychic and organic NOVA. But this looks as if we have returned to the Theory of Everything, where all higher structures are already implicitly, and thus at least as categories or principles, present from the very beginning. We can solve this problem by emphasizing that for all If-Then constants the If-component (which was X in the above cases), and also the Then-component, is not presupposed. Only when the If-component is actually materialized somewhere does the Then-component follow necessarily, meaning that only then is the category present, i.e. the category is present only when it actually applies. In all other cases the category is not there, and so its components are not presupposed. This is in accordance with HARTMANN's position that a category does not exist in the absence of its corresponding concretum. So when we have our If X, then N constant, this constant or category, and with it the condition X, is in no way present as long as the category is not actually applying. So X is not presupposed, but only present when the category actually applies, and the category does not
apply until X is present. And in this categorically peculiar case, i.e. the case of a "spurious category", as we might say, the category continues to apply only within the concretum that was initially determined by it. The spurious category is in the present case the constant If X, then N, resulting from the spontaneous switching of the category If X, then non-N to its opposite. And because all categories are only present when they actually apply, their range of relevance can only be assessed by us after the fact. If in an organism the condition X is no longer satisfied, it will die and return to the inorganic. From there it cannot be retrieved, because the inorganic is the range of relevance of the constant If X, then non-N. As has been said, life must replenish itself from its own stock, and it indeed does. The characterization (i.e. status, not content) of the categorical NOVUM that appears in the Psychic Layer, and also of that which appears in the Super-psychic Layer, can be done along the same lines as we did it for the organic NOVUM. All in all we have to admit several types of nexus categories, one characterizing each Layer: causal nexus (Inorganic Layer), organic nexus (Organic Layer), psychic nexus (Psychic Layer), and final nexus, i.e. final causality, or equivalently super-psychic nexus (Super-psychic Layer). The latter three are successive over-formings of causality and represent the appearing NOVA. The concreta of these nexus categories are themselves concretum-concretum determinations. Further we have the other category-concretum determinations, i.e. those apart from the nexus categories, and finally the Categorical Laws, which are category-category determinations and themselves categories. And this overall determinative fabric is, from place to place and from time to time, somehow interrupted by spots of total indeterminacy. At the transition from Inorganic to Organic this happens when the causal nexus of the Inorganic is over-formed in virtue of the advent of a totally new element, resulting in the Nexus
Organicus. This over-forming is described by HARTMANN in several places in his work, among others in his Der Aufbau der realen Welt, 1940, p. 560. The lower levels or Layers are not over-determined (in fact it is the concreta of these Layers, which are category Layers, that are not over-determined). Their processes give varying results, varying because of, for example, interfering noise that is present everywhere. The over-formed nexus, i.e. the nexus resulting from over-forming, overcomes this interfering noise and so leads to a definite, specific, and repeatable result. This repeatability is the new nexus law. In physical processes, for instance crystallization, there may be, with respect to the crystallization of a particular substance under the same thermodynamic conditions (temperature and pressure), several different possible outcomes, which all have, however, the same lowest energy. So in such a case there are several equally possible outcomes, several equally possible crystal structures. Observation of crystallization, however, shows that always only one particular lowest-energy structure materializes. While it could be that here there is, in all these cases, in fact only one lowest-energy configuration of atoms after all (meaning that all other configurations have higher energy), we could expect that this is not so in many complicated biochemical reaction networks as they occur in organisms. Several different results are then energetically equally possible, but in the organism, i.e. in an organic context or environment, only one is "chosen". If this is so, we have here an instance of some directive agent that is implied by the organic NOVUM. All this is in its generality acknowledged by HARTMANN, but he has overlooked the fact that the introduction of the categorical NOVUM in the theories necessarily entails a local (in)determinism, while HARTMANN denies any indeterminism, local or universal. Maybe in the domain of "deficient beings" with which quantum mechanics deals, such macroscopic local indeterminism as we have
just described is already anticipated. Turning now to the alternative theory that rejects the assumption of any NOVUM, we can ask ourselves what is worse: the assumption of localized indeterminism, or the assumption that all potentialities to create life, consciousness, and beyond, are already present in the basic physical laws of inorganic Nature? The drawback of the latter assumption is that there is then in fact not a genuine openness of Nature towards the future, i.e. towards future developments. It is true that life, consciousness, etc. will appear only when certain initial conditions are actually satisfied. So there is a certain unpredictability in all this, but only for us as not-knowers, not for Nature. But all potentialities are initially present, as assumed by the alternative theory, and when all of these are materialized, there can be no further development into still higher forms; i.e. when the whole stock of laws or categories initially present is totally unfolded, no genuine higher development can take place anymore. This is a serious drawback, because the theory suggests that somehow some "master creator" has laid it all down beforehand, albeit in an enfolded state; genuine creativity cannot take place from this initial state upwards. Moreover, and this is still worse, in spite of all the efforts to come up with a good theory of the origin of life, it has failed until now (see for instance SHAPIRO, R., 1986, Origins: A Skeptic's Guide to the Creation of Life on Earth), let alone some good theory about the advent of consciousness. Of course such failures do not prove the necessity of assuming a successive appearance of a genuine NOVUM, but they do point in this direction. We cannot deny the sharp boundaries between inorganic, organic, psychic, and super-psychic phenomena, and maybe they point to sharp boundaries between the corresponding principles (categories) of the inorganic, organic, psychic, and super-psychic, but maybe not. The idea that all potentials to develop life, consciousness, and beyond,
are already present right from the beginning, namely in the form of some general dynamical law (a general nexus category) which applies to the Universe as a whole (meaning that all perturbations of the system intrinsically belong to this system, because it has no outside), an idea summarized in the form of the general formula of the "Theory of Everything", can be compared with the Theory of a successively appearing Categorical NOVUM in still another way. The (in a certain way physically interpreted) general formula or dynamical law is the supposed starting condition of the Universe. It does not need the appearance of NOVA in order to finally generate life, consciousness, etc. However, it is evident that this general formula is itself a NOVUM, indeed a hyper-NOVUM. It is so with respect to initial nothingness. So the Theory of successively appearing NOVA is more humble than the Theory of Everything, because it assumes much simpler beginnings, i.e. it assumes a much more moderate NOVUM to begin with, viz. a much more moderate NOVUM with respect to initial nothingness. And in addition, it allows creativity to take place, and allows openness with regard to future developments in or of the Universe. To investigate how, say, organisms have actually originated and evolved totally from simple beginnings by virtue of the known laws of physics and biology (that is to say, along the lines of the Theory of Everything, where the laws of biology are supposed to be derived from those of physics) leads into a "reductionist nightmare", as is vividly described by Ian STEWART and Jack COHEN in their Figments of Reality, 1997, and also in a previous book of theirs. The reductionistic approach gets bogged down in a tangled mess of utter complexity. And although this "nightmare" is only epistemologically (methodologically) meant, the longer such serious difficulties continue to persist, the more support there is for a non-reductionistic theory, i.e. a theory that denies that all phenomena can, in theorizing, be
reduced to a simple beginning, a reduction allegedly legitimized by the reductionistic theory because it claims that from this simple beginning all subsequent phenomena have developed simply by unfolding. And the best non-reductionistic theory I came across is that of Nicolai HARTMANN in his Der Aufbau der realen Welt, 1940. So let us, in accepting HARTMANN's theory, not be troubled too much by the assumption of local indeterminism within an otherwise deterministic World. It is true that not all mysteries have now been solved, but we

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m3.html (2016-02-01)

- General Ontology XXIXm4

only reaches the nearest and most apprehensible conditions. Also, science does not comprehend the whole span of the collocations; it rather simplifies the given case and finally comprehends only the general aspect that is contained in it. But the causal nexus is a category of objective reality, a category of the objectively existing world, and as such not dependent upon understanding. The actually existing determinative coherence of the process states exists independently of knowing it. From this position one can go still one step further. Modal analysis (HARTMANN, Möglichkeit und Wirklichkeit, 1938) has shown that all that is regular is, just because of that, necessary. This law, the Law of Necessity concerning the real world, says that the same complex of conditions that makes something possible, when it is present in its completeness, also makes that something necessary. Or, differently expressed: When all conditions of the possibility of something are present, that something must appear. This not-failing-to-occur, not-failing-to-be-realized, is the necessity as it holds sway in the real world. When one applies this law, and connects it with the removal of all partial possibilities but the one that becomes real, one can give a still shorter but very precise modal definition of causality: Causality is that particular form of determination of the real process according to which all that becomes possible in that process appears necessarily. This necessity, as part of the essence of the causal nexus, is that which connects the total cause with its total effect, and partial cause with partial effect. For in the coherence of the consecutive collocations, the real necessity means just this: that something different from what actually follows cannot follow.

The meaning of causal necessity. Limits of the inescapability.

To the nature of this necessity also belongs its negative side, i.e. its intrinsic limitation. The fact that the effect with all its partial elements is rigorously produced from the cause only
means a dependence upon the whole set of partial elements of the cause, and not a fixation towards a specific result. A body or agent which could realize such a fixation is absent in the causal nexus. It is, in all its necessity of the effect, nevertheless indifferent as to what it itself produces. This is important with respect to the higher forms of determination that raise themselves above it. It is the other side of the ability of causality to let itself be over-formed. Surely, no power in the world can rule out any one of the causal components (partial causes, causal elements), but there can be powers that add new components. And because the causal complexes in the real world (collocations) are not closed systems, but take in without resistance every component that comes their way, the qualitative direction of the event can certainly be turned. In that case simply something different is produced from the altered total cause, and it is so produced with the same real necessity. The intervention of supra-causal factors in the process thus does not eliminate the causal necessity, but certainly can direct the process, i.e. circumvent or turn away from the original inescapability, and produce what could never be produced by the undisturbed causal process. If there were in the World only physical processes, then this qualification would be superfluous. The causal necessity would then apply without restriction and make up a general causal determinism. There exist, however, higher ontological Layers, and each one of them has its special, higher form of determination. And for these higher forms of determination the Law of Categorical Freedom applies, namely that they do not, it is true, rule out the lower determination, but do autonomously apply over and above it. Therefore these higher forms of determination can add new determinants to the lower determination. After all, they represent a categorical NOVUM with its own real-world conditions of possibility and its own real-world necessity (HARTMANN, 1950, p. 344).
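Hartmann's point can be sketched in a small computational analogy (entirely my own illustration, not from Hartmann; the factor names "seed", "water", "sprout" are invented for the example). A "collocation" is modeled as the complete set of causal factors at one moment, and the causal law is univocal: one and the same next state always follows from one and the same collocation. A higher-order determinant does not suspend this law; it merely adds a component to the total cause, whereupon the same law, with the same necessity, produces a different result.

```python
# Toy model of an over-formable causal nexus. A collocation is the complete
# set of factors present at one moment; the transition law is deterministic.
def causal_step(collocation: frozenset) -> frozenset:
    """Univocal causal law: identical collocations yield identical successors."""
    nxt = set(collocation)
    if "seed" in collocation and "water" in collocation:
        nxt.add("sprout")  # the effect appears only from the *complete* total cause
    return frozenset(nxt)

def over_formed_step(collocation: frozenset,
                     higher_determinants: frozenset = frozenset()) -> frozenset:
    """A supra-causal factor does not break the causal law; it only adds new
    components to the collocation, which then works with the same necessity."""
    return causal_step(collocation | higher_determinants)

base = frozenset({"seed"})
undisturbed = causal_step(base)                              # no sprout: cause incomplete
redirected = over_formed_step(base, frozenset({"water"}))    # sprout: altered total cause
```

The "openness" of the process thus exists only with respect to the possible setting-in of extra determinants; with the collocation held fixed, the outcome is fully determined.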
And this changes things substantially. For the causal nexus is neutral: it lets itself be forced to accept any determining component or factor, as long as such a factor does not eliminate its stock. Therefore the causal nexus can be over-formed, and is, within certain limits, controllable. By virtue of a given causal complex, not influenced by any intervention, that which "marches in", i.e. approaches from the future, is indeed unavoidable. But when a determining agent of higher order is present, and when it succeeds in intervening, a new real-world factor is added, and then that which was originally approaching can be turned away from. Seen in this light, the causal nexus, together with its necessity, is a very harmless type of determination. And this accords well with the fact that it is the most elementary and simple type of determination, and, as it were, only a minimum of determination. The total determination issuing from a complete complex of conditions, that is to say a total cause in the strict sense, is only present in the context of the whole world process, taken in its full and complete extendedness. In every slice transverse to Time are then also included all higher-order determinants, insofar as these are present. Purely causal determination, that is to say without intervention of higher-order determination, only rules in physical, i.e. inorganic, processes. In organic processes it is already slightly over-formed, while in the sphere of human acting, experimenting, and willing, there is only a mixed determination. One can therefore also say: The openness (apparent incompleteness) of the complexes of determinants in the undisturbed natural processes surely does exist until entering the NOW, but it does not exist for the causal natural process itself, only for real higher-order powers. This openness therefore certainly exists for human acting. For it only exists with respect to the possible setting-in of supra-causal determinants. And strictly speaking, also the plurality of possibilities in each phase
of the causal process only exists with respect to this setting-in. In itself, no causal process state is plurivocal. From every state, as a complete collocation of factors, there is only one possibility with respect to the next state. In itself, causal determination is a complete determination. Only outwards, or better, upwards, is the causal determination not closed.

Partial causes. The affirmative nature of causality.

What we in life know as causes are always only partial causes. Among them there can well be the most essential elements of the total cause, sufficient for a short-term causal assessment of the unfolding event. For long-term assessment they are insufficient. What will do for more or less nearby goals is already incomplete for science. Ontologically, there are no such things as partial causes. In fact, there are neither isolated causes nor isolated causal series. There only exists an interweaving of causal series, and this is very complex and obscure to the knowing mind. Ultimately, there is only the one all-embracing causal real-world process, at least with respect to the natural event. In this one world process, at any given moment everything is co-dependent, i.e. everything is connected with everything else, also the apparently far away and seemingly disconnected. Cause in its strict sense always is exclusively the total cause, rooted in the whole of the world process (HARTMANN, 1950, p. 350). The total cause, in turn, is not simply the sum of partial causes, although it is composed of them. This composition itself, namely, is more than a sum. It is an interweaving of elements, a system, albeit a fleeting one, a totality in which the partial elements are already connected to each other in a peculiar way. This connectedness is a simultaneous one, and is the result of the fact that the elements have together originated in the one all-embracing real-world coherence; they bring along the specific nature of their connection from their causal origination. But they are always differently connected than are
the corresponding elements of the total effect. For the originating transverse connections also belong to the qualitative determinedness of the effect, and are, as its partial elements, in no way contained in the cause. Also, they are in the overall process time and again differently constituted. Only the overall cause (total cause) is a truly productive cause (causa efficiens) at all, not the partial cause. We, however, never know the total cause, because of its extensiveness. In practice we automatically only consider partial causes. That is why our causal knowledge is limited. As a result of our incomplete causal knowledge, the partial causes seem to come in two types, namely positive and negative ones. When an expected effect fails to materialize, we speak of failing conditions, i.e. conditions that are apparently absent, and we get the impression that the non-existent conditions, just like the actually existent conditions, contribute to the total effect. This is a typically human way to see things. It comes from partial causes that are isolated by us and made independent; it tries to construct from them an image of the total cause. But in the real-world causal connection a different rule applies: All conditions that work together in a total cause are wholly affirmative. They are thoroughly effective factors of corresponding elements of the total effect. There are no negative causal factors. Or, in older language: there is no modus deficiens among the components of the causa efficiens. There also are no actually failing causal elements. Surely there are such causal factors that prevent something. But this act of preventing is not the negation of something that was already foreshadowed. There is nothing foreshadowed or predetermined in the causal fabric. There is only that which just actually materializes, namely that which is realized in the collective effect of all causal factors. Yes, it is already wrong to say that something determined does not come about, does not come into being, because when it does not
come into being, it is not determined. For in the causal connection only that will attain determinateness which is actually effected, not what is not effected.

Individuality of the causal nexus.

If causality were nothing else than a law or rule, then its nature would just be something general, and only something general. But neither its series as such, nor its continuum (or microscopic discretum), nor its dynamic and its productive activity, is general. The causal nexus is rather thoroughly individual. It should already be such because it is a real-world relationship, connecting real causes with real effects. And all real-world phenomena that are ontologically independent (i.e. not, for instance, properties as they are in themselves) are one-off and do not repeat. Realizing that the causal nexus consists in the dynamic of production, it is evident that its individuality implies that its production itself is different from case to case. And this accords with the fact that it continually produces novelty. This does of course not exclude that lawfulness is contained in the causal process. Causality is presupposed by processual lawfulness, and it is clear that thus causality is something different from lawfulness. As the process in general is not identical to the law according to which it proceeds, so also the causal process. And as the former consists in one-off courses, so also the latter consists in one-off effecting, although both do not lack lawfulness. Generality and individuality do not go separately side by side in the one World. Rather, one and the same real-world entity or being is at the same time general and individual, namely such that all its single traits reappear in innumerable other cases, and thus are general, while their combination as a whole, and within a single being, is one-off, not reappearing in exactly the same form. A different individuality does not exist. But also a different kind of generality does not exist in the real world (HARTMANN, 1950, p. 357).

The nature of the
categorical layering of the real World.

The causal nexus does not have limits in a negative sense. But there is limitation in a positive sense. It is there where higher-order determination sets in. This does not apply with respect to just any different kind of determination. Already in the range of physical Being, such other kinds of determinations pop up, for instance natural law, interaction, and others. These do not limit the causal nexus; they do not even over-form it. They just complete the real-world determination of the same processes and objects from another direction. They thus simply connect with the causal nexus without changing its essentials. Said differently, natural law and interaction are categorically homogeneous with the causal nexus. But things are different with respect to higher-order forms of determination. And we mean here forms of determination that are the result of the over-forming of the causal nexus. The over-forming of the causal nexus occurs at three points along the sequence of Layers, namely (1) at the transition from the Inorganic, which is the ontological locus of origin of the causal nexus, to the Organic, where the Layers also relate to each other as the higher over-forming the lower, (2) at the transition from the Organic to the Psychic, where the Layers relate to each other as the higher over-building the lower, and finally (3) at the transition from the Psychic to the Super-psychic, where the Layers also relate to each other as the higher over-building the lower. Because the Layers are by definition categorically, and therefore ontologically, different from each other, that is to say they are heterogeneous with respect to each other, one could think that the causal nexus could not cross the boundary between Layers. And this would imply that it cannot and does not apply in the psychic domain, and that in the same way psychic determination does not apply in the inorganic domain. The latter determination would not even apply already in the organic domain, for the same
reason. From this the theory of psycho-physical parallelity emerged, i.e. the theory that physical events in the brain do not causally relate to psychic events, but that they proceed parallel to each other. Of course such a theory is connected with the separation of body and mind. HARTMANN (1950, p. 359) opposes such a theory by denying that causality cannot cross the Layer boundaries. He, however, does this in a way which, to my mind, is a bit inaccurate and needs some rethinking. And because this rethinking turns out to be very instructive as regards a proper understanding of the ontological layering of the real World, we will paraphrase HARTMANN on this point and add our own views on the matter (which I have done implicitly all along, but until the present case I seldom disagreed with HARTMANN). Is (HARTMANN, 1950, p. 361) the causal nexus able to cross Layer boundaries, and especially all the way up from the Inorganic via the Organic to the Psychic? Can it, issuing from one Layer, be effective in another? While writing about all this (p. 359-371), HARTMANN should be more careful in formulating things. The Layers as such cannot be causally related to each other, i.e. one Layer cannot cause another, because Layers are category layers, and categories do not cause other categories, neither are they caused by other categories. Causation is not a category-category determination, but a concretum-concretum determination. Therefore only real entities, not principles, can cause each other or be caused by one another. On the other hand, the category of causation, originating in the Inorganic Layer, reappears in all subsequent higher Layers, but, as we know, it does so in a modified way, which here means that it is over-formed each time it crosses a Layer boundary. But, as indeed HARTMANN himself has stressed, the reappearing of causality alone does not imply that the causal nexus can be heterogeneous (for instance spatial cause, non-spatial effect), but only that a transition from cause to effect can take place in
every real-world Layer, i.e. it can occur at least within one and the same Layer (cause in this Layer, effect in this same Layer), while this applies to all real-world Layers. Whether the connection of cause and effect can cross a Layer boundary while going from cause to effect, or whether a cause itself, or an effect itself, can be ontologically heterogeneous (for example containing spatial elements together with non-spatial elements, or, equivalently, containing elements from the Inorganic together with elements from the Psychic), can only be decided on empirical grounds. Can (see HARTMANN, Ibid., p. 361) it, i.e. the causal nexus, also do so, i.e. be heterogeneous, in the case where the relationship between the Layers is that of over-building instead of over-forming, as it is between the Organic and the Psychic, where consequently several essential categories of the lower Layer are halted, and the system of those categories does not, as matter, enter the higher? Such questions cannot be answered by just saying that the causal nexus reappears in the higher Layer, because in the higher Layer there could be a closed causal coherence of its concreta, while at the same time this causal coherence does not intervene with elements in the lower Layer, or vice versa, where the lower Layer has its own closed causal coherence of its concreta, moreover qualitatively different from that of the higher. So the fact of causality being present in both Layers, albeit in the form of different causal coherences, where as regards content (and only as regards content) the higher one can be seen as a modification of the lower one, does not decide whether there can or cannot be causal connections between those Layers. In the higher Layer the causal nexus is already over-formed, and perhaps it applies there only in its over-formed state. (About causal connection between Layers, see the remark above.) While, according to HARTMANN, the Psychic Layer over-builds the Organic Layer, the psychic causal nexus, as a single category, only
over-forms the organic causal nexus (also according to HARTMANN). This is not without problems. The original causal nexus stems from the Inorganic Layer. There it is spatial. When it is over-formed at the transition to the Organic, it is still spatial, because the category of Space is not halted at the transition from the Inorganic to the Organic. In the Psychic it is said by HARTMANN to be once more over-formed. But here the category of Space is halted, which then implies that psychic causality is not spatial anymore. Only causality as such, i.e. a temporal succession of states as a result of states determining next states, survives the transition from the Organic to the Psychic, together with the potential organization of causal factors (i.e. controlled intervention of extra factors) as it is already present in organic causality. The spatial aspect is lost. Can this psychic causality really then be said to be the result of just over-forming, or should one hold that it is rather the result of over-building? Because there is still something surviving, it is probably best expressed by its being over-formed. Anyway, these two causal nexus (plural) are very different indeed. It could (HARTMANN, Ibid., p. 361) then be theorized that each Layer of Being has its total process and total determination all for itself, while it does not enter into any immediate causal relationship with other Layers. This is especially significant for the lower Layer (lower with respect to a higher Layer), because the higher is in any case carried by the lower, and insofar it is dependent upon the lower anyway. In fact it is, according to HARTMANN, not possible to extend the causal isolation of the Layers this far. And indeed, hardly anybody claims the causal independence of the Organic from the Inorganic, or the independence of the events of human history from geographic and climatic causes. Here the intervention of clear-cut physical factors, together with their causality, is evident. One can only dispute the degree of their
significance. As regards the relationship between the Psychic and the Organic, on the other hand, such causal independence is often claimed. This is understandable in virtue of the peculiar nature of the psychic as compared with the organic. The collective concretum of the Psychic Category Layer is consciousness, with its typical internal world, for which there is no parallel in any other category Layer. What is often considered to be impossible is the heterogeneous causal series, among whose members are physical as well as psychic members. The right point is that indeed there remains something not understood in this. The limit of spatiality separates the psychic from the physical. It was then asked: How can a causal process, for instance that of observation, start within the spatial while proceeding further within the non-spatial? In such a case the cause has to be spatial (material), the effect however non-spatial and immaterial. What can be brought up against this? First this: It is not necessary that in their proper domain physical causes remain without an effect, while that effect appears instead in the psychic. The dynamic spatial causes rather can have their dynamic spatial effects, and likewise the non-spatial causes in the psychic events their non-spatial effects. This, according to HARTMANN, does not exclude that both of them can also have heterogeneous effects, i.e. when we have a spatial cause, its effect could include both spatial and non-spatial elements, or, in other words, the range of a possible effect of a spatial cause can extend across the spatial and the non-spatial. And also the range of a possible effect of a non-spatial cause can extend not only across the non-spatial, but also, and at the same time, across the spatial, that is to say, the effect can comprise spatial and non-spatial elements, i.e. spatial together with non-spatial. This seems fairly obvious when seen from the Law saying that all real-world factors (and here thus psychic as well as physical) of the given collocation
participate in producing the total effect. For a human being, together with his or her psychic life, is situated right in the middle of the real-world coherence, that is to say, he and his internal life belong to it. So it should be clear that in such a human being, who is a categorically layered being, physical and psychic components are connected with each other every time, in every internal collocation. I would like to add: In the category Layers these components are separated (physical components belong to the Inorganic Layer, psychic components to the Psychic Layer), while they are not separated in a concrete human being; there they are integrated. So in a concrete human being at least inorganic (physical), organic, and psychic causes are present, but belonging to different category Layers. So while we must admit such a close connectedness of heterogeneous Layers of Being (category Layers) as that of the bodily and psychic in a single human being, it is a priori not credible that the respective processes and process systems would not also causally influence one another. Higher-order concretum-concretum determinations presuppose causality, i.e. bare causality, not necessarily spatial causality. The relevant psychic processes, first of all the processes of perception, are not just like that produced by physical causes (objects that are being perceived, and the physiological machinery of perception), certainly not by them alone. And as to the role of perceived physical objects, the ensuing psychic processes are merely elicited by them. In all this a whole specific apparatus participates, including the mentioned psychic processes brought along by consciousness. Perception thus has its widely extended causal complex also in the psychic domain itself, and the stimulus coming from without, which is processed by physiological processes in sense organ and brain, is only a partial cause. Without the internal psychic causal system, the object to be perceived does not elicit such that it leads to an internal
image of that object. The proper nature of the effecting itself does not need to be a physical one. It even cannot be physical, because the last stage of the effecting act is non-spatial and immaterial. The decisive consideration as regards the psycho-physical problem is, in addition to what was said above, the following: The heterogeneity of the causal series is far from occurring only at the Layer boundaries. Also within the physical causal sequences, the effect of the cause is not thoroughly homogeneous all along. It is there only categorically so. But this does not exclude other heterogeneities, i.e. it can, within the physical Layer, be heterogeneous in a different way, different from categorical heterogeneity. Causal series are, in contrast to, for instance, mathematical series, in themselves heterogeneous series anyway. From the mere consecutive character of causality one can never determine in advance the special content of the ensuing effect. One can do so only on the basis of extensive knowledge of the relevant laws. And also this only within certain limits. Knowledge of laws, however, requires an extensive observational experience. In the meantime, the act of effecting itself, the production as such, remains un-understood. It is presupposed. Thus, given the fact that all cases of effecting involve one or another form of heterogeneity, why then couldn't the disparateness of cause and effect go one step further, namely beyond the limit of categorical similarity? The essence of causal production, i.e. the transition from cause to effect, does, insofar as we know it now, not restrict this. And further: Given that the secret of causal production remains unfathomable, what will it then mean that we cannot comprehend how something physical affects something psychic? It is the same incomprehensibility here as within the physical and within the psychic. It is more so only because of the categorical disparateness. Basically, we do not understand the regular physical causal nexus any better. Only our being
accustomed to it suggests its comprehensibility And there nobody feels uneasy with it And rightly so Incomprehensibility does not neutralize the existence of the incomprehensible Limits of comprehension are not limits of being Evidently the degree of heterogeneity of cause and effect does not make a difference Moreover how much or how little we understand causation does not alter anything in the act itself of effecting where and when it takes place The psycho physical relationship being itself a causal relationship as in the a ffecting of the psychic by a physical cause for instance in perception does not therefore need to be a purely causal one Other forms of determination must be involved For here we do not have to do with just the simple linear causality but with already organically over formed causality The physiological processes as functional processes are themselves not merely physical processes anymore The form of determination of the specifically organic kind is it is true not yet clear to us an attempt to such an understanding of the nexus organicus was made earlier but that it is some over forming of causality of causality as we see it in the physical domain is beyond doubt So much we already know that in the sense organ total effects of organic systems are involved insofar as these are geared to a very specific reactivity Only in this way it is possible that minimal stimuli can elicit very significant and disparate psychic effects Here already from the beginning there was an error in the considerations concerning the psycho physical relationship It was wrong to assume that causality itself represents the whole problem It was wrong to consider the categorical relationship directly between inorganic causality and a psychic determination One should realize that something lies between these namely the organic form of the nexus One neglected the latter because one didn t know about it If one restores the natural relationship in the sense of a stacking of 
Layers of Being then all fear for a mechanizing of the psychic is unnecessary Already the organic process which is the proper counterpart of the psychic event does not exhaust itself in mechanical causal determinateness For the psycho physical relationship it is entirely sufficient that only certain causal elements or aspects are brought along into the higher Layers Whether they are brought along i e reappear directly or indirectly does not make a difference Over and above that the most far reaching intrinsic determination within the psychic can exist The next Figure illustrates diagrammatically the over forming of causality We restrict ourselves here to the Inorganic Organic Psychic Layer boundaries Figure above Successive over foming of the category of Causality causal nexus The Layer of origin of Causality red is the Inorganic Layer There it appears in a spatial way dark blue because the category of Space co determines all inorganic entities In the Organic Layer it is organically over formed light blue Because the category of Space is retained in the Organic Layer the spatial aspect dark blue as it was inherent in inorganic Causality reappears At the boundary between the Organic and the Psychic Layer the category of Space is halted that is to say it no longer appears or reappears in the Psychic Layer Therefore causality red reappears without spatial aspect As such it becomes psychically over formed yellow So we see that causality as such i e causality without qualification reappears in all Layers and also in the Super psychic Layer not drawn The just given over forming of the causal nexus directly implies its reappearance and therefore the possibility of heterogeneous causality i e the possibility that the transition from cause to effect not only can take place between two concreta cause and effect within one and the same Layer but also between concreta of different Layers And that indeed the above reappearance takes place can be based on the following 
consideration. We can ask: what degree of heterogeneity would be sufficient to block the nexus connection from cause to effect? Clearly, only such a degree of heterogeneity that exceeds the intrinsic categorical conditions of the causal relationship itself. In that case the nexus would be broken. Which, then, are the intrinsic categorical conditions of the category of Causality? Apart from the all-pervading Fundamental Categories (reappearing in all Layers, including the Mathematical), these categorical conditions boil down to the categories of Time and Process. And these apply to all real-World Layers, from the Inorganic all the way up into the Super-psychic: processes are present in all these Layers, that is to say, all concreta of whatever real-world Layer have a processual nature, and because Process presupposes Time, all these concreta are temporal. One of the Fundamental Categories, and thus present in all Layers, is that of Determination in its most general form. If we then have a process in which the successive states are not just successive but determine each other, in the sense that each state is determined by the previous state and determines the next, then we have Causality. And from this it is clear that the category of Causality reappears, under modification, in every real-world category Layer. The category of Space does not belong to the categorical conditions of Causality, because the above-mentioned conditions are already sufficient. The states of a process with successive determination do not need to be spatial, as we see in psychical states, which determine each other in a succession in Time but are not spatial.

We can now summarize briefly our findings about the ontological heterogeneity in causality, as they were partly based on, and inspired by, those of HARTMANN, Philosophie der Natur, 1950/1980. The causal complex (total cause) as well as the effect complex (total effect) can be ontologically heterogeneous. If we concentrate on the psycho-physical problem, and on the presence or absence of spatiality in causality, then this means that there exist the following possibilities:

Causal Complex (total cause)        Effect Complex (total effect)
Spatial                             Spatial
Non-spatial                         Non-spatial
Spatial and non-spatial             Spatial
Spatial and non-spatial             Non-spatial
Spatial and non-spatial             Spatial and non-spatial
Spatial                             Spatial and non-spatial
Non-spatial                         Spatial and non-spatial

This heterogeneity is in itself unintelligible, but this is not surprising, because the transition from cause to effect is unintelligible anyhow.

Unity of the real-world coherence as regards causality.
As the unity of a temporal coherence and of a processual coherence pervades the whole real World, so also does the unity of a causal coherence (HARTMANN, 1950, p. 370). For this causal coherence the Layer boundaries, representing categorical heterogeneity, are, as we have seen, no barriers. It goes in diverse forms to and fro. A falling stone terminates a spiritual life; a natural disaster destroys a historically evolved culture, or makes place for a new one; an idea in the head of some human being transforms a country. The over-building relationships in this respect differ only little from relationships of over-forming, for always some lower categories pass over into the higher Layers, and among these is Causality. As far as temporality extends across the real-world coherence (and it encompasses it in its totality), the causal coherence too does not meet any borders. Surely, all the time it encounters higher-order determinations, but it does not conflict with them: it integrates with them in the sense of subordination, lets itself be over-formed. The dualistic theories, which claim an absolute antithesis to exist between ontological domains, such as Descartes' doctrine of two substances (extensio and cogitatio, body and mind), have failed to understand this. Their error was rooted in the total absence of a categorical analysis, in failing to acknowledge the stratigraphical nature of Being, and in the virtual absence of all enquiry into the forms of dependency obtaining between Layers, and also between their concreta. As soon as one drops this false metaphysical presupposition, the whole bunch of self-made difficulties in the psycho-physical problem also vanishes. A psycho-physical causality then is not such an enigma anymore. At least the enigma is of the same order of magnitude as that of causal production at all, i.e. the creation of the effect from the cause. It is only a qualitatively more complex enigma, and a more obtrusive one.

Some notes on the objectivity of Causality (HARTMANN, 1950, pp. 371).
It could be imagined that causality is just a subjective way to see things, a device to explain events, and that in reality causality does not exist, or at least not in the way defined above. It could also be that it apparently does not apply in some special cases or in some particular domain, while it does apply in others; i.e. in some area belonging to a given Layer of Being, say the Inorganic, it seems not to be valid (as in the quantum-mechanical domain), while in many other areas of that same Layer of Being it is valid, which might perhaps be an indication of its subjectivity. Causality, namely, could well be just an evolutionarily developed tool for coping with everyday reality, and survival, and nothing more. It works because everyday experience goes about its business only very roughly, such that the errors connected with the causal interpretation of the World
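The definition of causality given above, a process in which each state is fully determined by the previous state and determines the next, can be sketched as a deterministic state-transition rule. The update rule and the numbers below are hypothetical illustrations of my own, not anything taken from Hartmann's text:

```python
# A minimal sketch (hypothetical rule) of causality as successive
# determination: each state is completely determined by its predecessor.

def step(state):
    # Deterministic transition rule: the "cause" (current state)
    # completely determines the "effect" (next state).
    return (3 * state + 1) % 17

def run(initial_state, n_steps):
    """Return the full history of successive states."""
    state = initial_state
    history = [state]
    for _ in range(n_steps):
        state = step(state)
        history.append(state)
    return history

# "Same cause, same effect": identical initial states always
# yield identical histories of successive states.
a = run(5, 10)
b = run(5, 10)
assert a == b
```

The point of the sketch is only the repeatability: given the same initial state, the same sequence of states necessarily follows.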

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m4.html (2016-02-01)

- General Ontology XXIXm5

perhaps not so evident at the global scale, i.e. at the scale of the Universe as a whole. If we accept the latter, i.e. if we indeed hold that the Universe as a whole does not necessarily tend to relax maximally when it is locally stressed, but that for it just a little more than cancelling out the locally originated stress is sufficient to allow the effect to take place, and if we moreover realize that entropy increase as such can be accomplished in many different ways, then entropy increase is only a conditio sine qua non, an indispensable but not necessarily sufficient condition, for causality to take place. All other aspects of causality, i.e. all other aspects of the production of the effect, still remain unintelligible. Given a certain cause, its effect is only partially conditioned, and thus only partially understood, as a result of the demand that the net entropy change be positive, realized either directly by the system itself (for instance in the case of a falling body) or by its immediate surroundings. The latter case is the one where the pattern-generating dissipative system enters into a state of higher order and thus lower entropy, implying a definite pattern and thus differences. And these in turn mean an increase of stress, which more generally can mean an increase in potential energy, for example when we stretch out a spring. (In this particular example the increase in stress already takes place without an increase of order, namely just because attracting particles come to lie farther apart, in this way increasing the potential energy.) The entropy decrease that took place within the pattern-generating dynamical system generally becomes a little more than just cancelled in virtue of an entropy increase of the immediate surroundings of the system. And because the net change of entropy is positive (entropy increases), the area containing the system is relaxed as compared with its state before the pattern was generated. And this is already sufficient for the effect to take place, where the cause is, for example, the initial state of a pattern-generating dissipative dynamical system, and the effect is the generated pattern together with its surroundings, where the entropy increase has taken place. The relaxation does not need to be maximal, i.e. it need not extend across the whole Universe. Therefore this relaxation can be accomplished in many different ways, which implies that the specific and constant way of relaxation, the specific effect which is produced by a particular cause in a repeatable way (same cause, same effect), is still not intelligible. Therefore causality is still inherently creative, even when energy restrictions are included. In the next Section we will elaborate still more on the relationship between causality and entropy.

Causality and the spontaneity of entropy increase.
Entropy increase is a transition from an unstable, more ordered configuration of material elements to a more disordered configuration. Two examples are given. An example taken from near-equilibrium systems is the following. A supersaturated solution of a given chemical compound is an unstable configuration of material elements. Its overall orderliness is higher than that of the next configuration, which is crystal-and-solution, i.e. the generated crystal now situated in a just-saturated solution which belongs to the system. Although the crystal is more ordered than any solution, heat was given off into its surroundings, the solution. This causes an increase of the thermal agitation of the molecules in the solution, and thus an increase in entropy of that solution. And this increase must, according to the Second Law of Thermodynamics, be such that the net change of the entropy is positive, i.e. the entropy of the system as a whole (crystal plus solution) must increase, despite the local increase of order in the system (the crystal lattice). The same applies to the phenomenon of solidification (crystallization) of some molten material after supercooling (which makes the configuration unstable): a more ordered state appears during crystallization, it is true, but heat is given off to the environment, increasing its entropy, and this environment belongs to the system. (See for authorization NOTE 4.)

An example now taken from far-from-equilibrium systems is the death and decomposition of an organism. The ordered configuration passes over into a disordered state as soon as the barriers against decomposition are removed, resulting in an increase of entropy. In itself the far-from-equilibrium structure is ordered, but unstable. The fact that entropy increase is spontaneous can be understood because it is equivalent to leveling things out, and therefore to relaxation. So it is clear that an unstable, more ordered configuration (intrinsically unstable because it is actively upheld, or forced, into the more ordered state) will, when all possible barriers are removed, spontaneously transform into a more disordered state of the system as a whole (net increase of entropy), because in such a state everything is leveled out, at least more so than in the initial state, which means that the system as a whole now finds itself in a more relaxed overall condition. So it is at least intuitively evident that the system will spontaneously pass over into this more relaxed condition as soon as the relevant barriers are removed. When we let go of a stretched spring, it will spontaneously contract.

This spontaneous transition from an ordered state to a disordered state is sometimes explained as follows. Because, with respect to a given set of different elements, there generally are many, many more mathematically possible disordered configurations of the elements of the set than there are ordered configurations, the chance that, in the case of a re-configuration of the elements, an ordered configuration will (after the fact) be seen to have been followed by a disordered configuration is much, much larger than the chance that an ordered configuration will be seen to have been followed by another ordered configuration, and certainly larger than the chance that a disordered configuration will be seen to have been followed by an ordered one. Such an explanation would only make sense if the initial configuration or state were indeterminate as to the nature of the next configuration (state), i.e. if every mathematically possible configuration were equally possible physically. Then indeed the chance is very big that an ordered configuration (state), or a disordered configuration for that matter, will be seen to have been followed by a disordered configuration (state), where "followed by a disordered configuration" in fact means followed by one or another disordered configuration, because there are so many more disordered configurations of the elements of the given set than there are ordered ones. There is, however, no reason to believe that any given configuration of an intrinsic dynamical system of material elements is indeterminate as to the nature of the next state (configuration), i.e. it is not so that, given a particular state (configuration), there is more than one next state that can emerge from this given initial state. On the contrary, we'd better assume that in all such cases causality rules over things. So the initial state (initial configuration of material elements) is the cause of the next state (next configuration), which is the effect. This next state is completely determined by the previous state of that same dynamical system. One aspect of this being completely determined is that the system will necessarily relax as soon as barriers to doing so are removed. So in our case of an unstable, ordered configuration of material elements where the barrier against spontaneous transformation is removed (which is the cause), we will obtain a disordered configuration (which is the effect). This is the part of the effect's being completely determined that is demanded by thermodynamics. The other part of this being determined consists in the fact that from a particular, i.e. given, initial configuration of material elements a particular disordered
configuration will follow, instead of any other disordered (or ordered, for that matter) configuration, i.e. one specific disordered configuration out of the many that are mathematically possible. This constant relation between a particular initial ordered configuration and a particular disordered configuration (instead of some other disordered configuration) is the creative, genuinely productive, and therefore non-intelligible aspect of causality. In accordance with this, another particular initial configuration will give another particular configuration as its next state.

All these considerations are based on the validity of the Second Law of Thermodynamics. What follows is taken from a university manual written by Van MIDDELKOOP, 1971, pp. 58. This Second Law can be expressed in terms of heat engines in the following way. Let Q1 be the heat supplied to the engine, Q2 the heat given off to the environment, and W the work done by the ideal machine. The work done by the machine would be maximal when Q2 = 0. Then we would have Q1 = W, which means that all the imported heat is transformed into work. But this is never the case, because in, say, a steam engine the used steam always still possesses a considerable amount of heat. Q2 is a function of T2. If T1 and T2 are absolute temperatures, then it turns out that

Q1 / Q2 = T1 / T2.

And if T2 = 0 (absolute zero temperature), then Q2 = 0. Only in that case, in the ideal engine (no heat leakage, no friction), is all heat transformed into mechanical work. Apart from this, there is, even in an ideal engine, after work has been done, always some heat left that must be exported. Carnot (1796-1832) defined the ideal heat engine, which has no internal friction and works only on the basis of a temperature difference. (Schematically.) Let us now calculate the efficiency of an ideal steam engine using the above relation Q1 / Q2 = T1 / T2. Suppose this engine works at T1 = 100 °C, while T2 = 15 °C (T2 being the temperature at which the used steam is condensed). Then T1 = 373 K and T2 = 288 K, so the efficiency is 1 − 288/373 ≈ 0.23; that is to say, the efficiency of even an ideal heat engine is only 23%. A real engine will, as a result of losses, have a still lower efficiency.

The Carnot engine can also be reversed: by supplying mechanical work, the exit temperature T1 is made higher than the entrance temperature T2. (Schematically.) This is the case of the refrigerator and the air conditioner. A refrigerator cools its contents and heats the room in which it stands, thus reversing the flow of entropy and increasing the order within the refrigerator, but only at the expense of the increasing entropy of the power station producing the electricity that drives the refrigerator motor. The entropy of the entire system (refrigerator and power source) must not decrease, and in practice will increase. Here there is a flow of heat from lower temperature to higher temperature, but this flow is not accomplished in a self-acting way, i.e. it is not spontaneous; it is driven.

Now we can formulate the Second Law very concisely by means of the concept of entropy. The following discussion will result in this formulation. The entropy S is a quantity that depends only on the amount of transported heat and the temperature T at which this transport takes place, and it can be defined by its change: the change of entropy dS is equal to the amount of transported heat dQ divided by the temperature T at which this change occurs,

dS = dQ / T.

The Second Law of Thermodynamics now is equivalent to the following statement: in a physical or chemical process the entropy S increases until equilibrium has been reached. And this statement is equivalent to: heat only spontaneously flows from higher to lower temperatures. This latter statement will be made clear as follows. Consider two systems with initial temperatures T1 and T2, where T2 > T1, and where these systems are in contact with each other. Suppose that initially S = S1 + S2, i.e. the total entropy is the entropy of the first system plus the entropy of the second system. Because of the temperature difference there is a flow of heat from system 2 to system 1. This means that S1 increases by an amount dS1 = dQ/T1 when an amount of heat dQ is transported from 2 to 1. And because system 2 loses heat, we have dS2 = −dQ/T2. So dS, the change of S with respect to the total system (1 + 2), is

dS = dS1 + dS2 = dQ/T1 − dQ/T2 = (1/T1 − 1/T2) dQ.

And this is a positive quantity, i.e. dS > 0, because T2 > T1, therefore 1/T2 < 1/T1, and therefore 1/T1 − 1/T2 > 0. Further, we know that dQ > 0, so dS = (1/T1 − 1/T2) dQ > 0. So the entropy increases until T1 = T2 (equilibrium). If the heat had spontaneously flown the other way around, from something having a lower temperature to something having a higher temperature, dS would be negative, thus contradicting the Second Law. Indeed, when T1 = T2, then 1/T1 = 1/T2, so 1/T1 − 1/T2 = 0, and therefore (1/T1 − 1/T2) dQ = 0, and thus dS = 0: the entropy has become constant. Entropy can be considered a measure of chaos (disorder). (Van MIDDELKOOP, G., 1971, pp. 58-60.)

Entropy figures in the Second Law of Thermodynamics. What, then, is the First Law? If we add to a system (for example a cylinder filled with gas below a piston) a small amount of heat dQ, and do a small amount of work dA on the system (for instance by compressing the gas), then as a result of this the energy U of the system must have increased by

dU = dQ + dA.

(In fact we should write dQ and dA with the Greek letter delta, because only dU is a complete differential.) For the sum of a series of successive small quantities (increments) like these we use integrals, because the series of increments need not be regular. We can also say that expressing them as integrals is a generalisation of dQ + dA = dU. If the change of the system is not small, for instance when we add much heat and compress the gas strongly, then the added heat Q and the added work A will be expressed as these integrals, and we get

Q + A = ∫ dU = U2 − U1     (1)

(the integral taken from state 1 to state 2), where 1 and 2 respectively indicate the initial state and the end state, and where the sum of the small energy changes is the energy difference between the end state and the initial state. This, i.e. expression (1), is called the First Law of Thermodynamics, which formulates the law of energy conservation in the science of heat. It says that an increase in energy of a system must be the result of the addition of heat (energy) or of work; it cannot come out of the blue. It also says that if we want to obtain work from some given system, the energy of the system must first be increased. It is therefore impossible to yield work from nothing. A machine which could realize this is called a perpetuum mobile of the first kind. The First Law of Thermodynamics is thus equivalent to saying that such a perpetuum mobile is impossible.

Causality and the probabilistic approach to entropy.
Above we discussed the relation between causality and spontaneous entropy increase. Entropy, however, is often explained in terms of probability. We already stated that this approach is inconsistent with the notion of causality, which notion implies strict determinism: a particular cause has a particular effect in a repeatable way (same cause, same effect). The particular effect thus necessarily comes out of the given cause. The cause produces its effect. The nature of this production is ultimately unintelligible, because it is essentially creative. The probabilistic approach, on the other hand, presupposes indeterminism. It is correct as a method, but incorrect as an ontology. In fact it is an epistemological approach. It sets out with a given state of a certain kind of system, namely a system consisting of an astronomical number of interacting particles (for example molecules), and because it is impossible to assess the adventures of each molecule, a statistical consideration involving probabilities is necessary in order to predict the change of macroscopic features of the system, such as temperature, pressure, volume, etc. Before elaborating on this point, let us first emphasize a few things. To analyse
causality we must consider causality as the necessary one-way connection between two states of a process. And because all real-world processes are regular, i.e. proceed according to certain rules or laws, causality is the necessary one-way connection between one state of a dynamical system and the next state. In all real-world systems of interacting elements entropy is involved. When heat plays a role in a system, then the system is not just a dynamical system, i.e. a system involving only forces and potentials, but a thermodynamic system. We can also say that if we include entropy in the consideration of a system, then a purely dynamic description is not sufficient: it must be expressed in the form of a thermodynamic description, even when no heat is involved in this system. The definition of entropy is then accordingly given in terms of probability, instead of in terms of heat and temperature.

A dynamical system of interacting elements can be either intrinsic or extrinsic. It is intrinsic when the interaction originates from the elements themselves, not in the sense that the relevant features letting the elements interact are given, i.e. distributed among and attributed to these elements, from without, but in the sense that the relevant features inseparably belong to these elements and let them interact. This is for instance the case in a system of interacting molecules: these molecules, because of their very nature, attract and repel one another depending on their distance from one another. On the other hand, when we have a set of ideal billiard balls at rest on an ideal billiard table (which means that table and balls will not experience any friction when the balls are rolling), and we shake the table vigorously, i.e. supply the system with kinetic energy, the balls start to move and will interact with each other by collisions. Although the balls interact with each other, they do not do this in virtue of their own relevant properties (their gravitational attraction can be neglected). Because of the supposed absence of friction and the supposed presence of perfect elasticity, the balls will go on interacting indefinitely on the table. Such a system of interacting elements is extrinsic: position and velocity are given to the elements from an outside source. Only their mass is intrinsic, but it doesn't change during the interactions; it is a constant co-determining the momenta of the balls. So each state of this system consists of a spatial distribution of the different momenta of the balls. And such a state immediately and completely determines the next state of the system. Because the just-described dynamical system is not intrinsic, the notion of the system's relaxation, toward which the system spontaneously tends and which should form the kernel of causality, seems to be irrelevant. It is, however, possible to find a good analogue of relaxation in such a non-intrinsic system of interacting elements. It is such a non-intrinsic system of interacting billiard balls that is often used for explaining entropy increase. Often, in such explanations, no reference to causality is made.

The problem as stated above, namely the problem of the relationships between causality, relaxation, determinism, and probability, turns out to be a very difficult one. It certainly cannot be settled here once and for all. During the ensuing investigation of the mentioned relationships it might turn out to be necessary to slightly change our earlier notion of causality, and with it the role attributed to relaxation, in order to eliminate the above-mentioned inconsistency between the probabilistic approach and the nature of causality. Regarding this probabilistic approach, we will show below that at first the presupposed indeterminism is not seen as a truly ontological indeterminism, but only as an epistemological indeterminism, i.e. not expressing an objective and intrinsic partial indeterminism between the states of a real-world dynamical system, but expressing a degree of unpredictability for some investigating mind. Although individual outcomes cannot be predicted, the chances that certain behaviors will actually materialize can be assessed, often with great precision, by such an investigating mind on the basis of statistical methods. But, as we will see shortly, there are indications that the partial indeterminism as presupposed by the probabilistic approach might turn out to be an intrinsic aspect of real-world dynamical systems, and therefore also of causality, which means that it might be an ontological (partial), not just epistemological, indeterminism after all. The reason that, in solving the above-mentioned inconsistency, we resort to amending our concept of causality, instead of rejecting the ontological significance of the probabilistic method, is the very nature of chaotic dynamical systems (unstable dynamic systems). The slight change in our notion of causality as held on this website will consist in the assumed fact that relaxation is not as such the driving force, nor is it as such a conditio sine qua non (necessary condition), for causality, i.e. for an effect to be produced by the cause. It is this only in a statistical sense. Unfortunately, if the latter is true, it decreases still more our understanding of the nature of causality. Let us now work out in more detail the things we've just succinctly suggested.

The probabilistic approach and its explanation of the increase of entropy is itself often explained by using a system of colliding billiard balls, while supposing all friction to be absent (implying that the system, once set in motion, continues to run indefinitely), and having in the back of one's mind a system of physically interacting molecules of a gas. Because we do not know precisely the momenta (NOTE 5) and the positions of the balls (molecules) at any one moment in time, we cannot follow each ball (molecule, particle) as it is moving and colliding in relation to the other balls, because there are supposed to be so many of them, and also because they are supposed to have
microscopic dimensions So we cannot predict the ensuing sequence of spatial patterns of the balls as a result of their motion and collisions Therefore we subdivide the billiard table into a number of equal areas boxes within which we do not specify the exact positions of the balls and also not the pattern of momenta associated with these balls The pattern of the balls configuration of system elements specified as to which balls are present in which box sub area of the billiard table is considered as having appeared after the previous pattern while the present pattern appears before the next pattern We assume that when the system of balls is set in motion by shaking the table for example every mathematically possible configuration of elements the distribution of the balls over the boxes equal sub areas of the billiard table has an equal chance to show up after some predetermined running time of the system This equal chance is connected with the fact that the system is randomized by the collisions of its elements each collision involves the leveling out of momenta of the colliding balls When we now categorize the possible patterns of the balls expressed by their presence in one or another box of the billiard table such as to distinguish between ordered configurations for example all balls in one box and disordered configurations balls more or less uniformly distributed over all the boxes we will find that there are many more mathematically possible disordered patterns i e patterns that are disordered in one way or another than there are ordered patterns i e patterns that are ordered in one way or another And it is now possible to assign a corresponding chance to each of these categories a chance that they will actually appear after some predetermined running time And indeed the chance that we will see at that prespecified time one of the disordered configurations of balls is much greater than the chance that one of the ordered patterns will appear at that 
prespecified time So statistically a system will move from an ordered state to a disordered state which is equivalent to the relaxation of the system This then is the statistical probabilistic approach which makes it possible to do predictions after all In and by this approach we invoke some degree of indeterminism of a next state i e next distribution of the balls over the boxes with respect to the present state that is to say this next state is not totally determined by the nature of the present state present distribution of the balls over the boxes But this indeterminism is just the expression of the fact that we do not possess detailed relevant knowledge It is just epistemological indeterminism and not the assumption of an ontological indeterminism Many offered textbook explanations of what entropy actually is have not always been clear on this point i e whether the encountered partial indeterminism is just epistemological or whether it is ontological as well in the sense of objective and intrinsic indeterminism of the processes as they are in themselves Statistical Mechanics compresses all the information of the molecular level into a few macroscopic thermodynamic variables such as temperature pressure volume and entropy Thermodynamical behavior originates from or is a manifestation of atomic and molecular events The latter can only be described and indeed very well statistically because of their smallness and vast numbers A system of molecules of a gas ultimately visists every possible state i e every configuration of positions and momenta that is relevant to the given dynamical system of interacting molecules and thus visits every point of its phase space See note 7 And because an initial condition initial state can never be assessed exactly NOTE 6 and also because of the often occurring high sensitivity of initial conditions with respect to the system s subsequent behavior thus the occurrence of unstable systems a statistical approach must be followed The 
initial condition of such a dynamical system must then be represented not by a point in the phase space (NOTE 7) of the system, but by a blob, that is to say, by a certain volume within phase space. Epistemologically we have partial indeterminism, but not ontologically. And when we have an ergodic system, which is a system that explores all of its phase space, we can detect an arrow of time, but only when we treat such a system statistically (evolution of the initial blob, which represents the initial probability distribution of possible states in the phase space of such a system). In a single trajectory, thus departing not from a blob but from a single point in phase space, no time going in one direction only is visible, because in an ergodic system this trajectory (apart from the fact that a single trajectory cannot be seen in the practice of natural science) does not necessarily represent a successive sequence of system states which become more and more disordered, but roams about in phase space in a more or less erratic manner, i.e. not smoothly from ordered to more and more disordered states. Only the average behavior, that is to say the behavior having the highest chance to be materialized by the system, indicates a steady increase in disorder of the successive states, represented by the system's average trajectory (expressed by the evolving blob). Systems that are confined to a small area of their phase space (non-ergodic systems) will eventually repeat their behavior (Poincaré's return), which means that there is no arrow of time and no ultimate relaxation of the system. This is the case in an ideal pendulum, i.e. a pendulum without friction. Its evolution can be depicted by a trajectory in its phase space, and this trajectory has the form of an ellipse. Once the ideal pendulum has received energy, it keeps on swinging indefinitely, exchanging within it potential energy and kinetic energy. So this is a purely dynamical system, in contrast to a thermodynamic system. No entropy increase is involved. Such a system, however, does not occur in reality. There we have friction, and thus dissipation of energy to the environment, resulting in the total energy of the pendulum steadily decreasing until it has become zero and the pendulum comes to rest.
The investigations of Boltzmann (end of the 19th century), Prigogine (second half of the 20th century), and many others boil down to an objective derivation of the irreversible from reversible mechanics, that is to say, from the behavior of large numbers of atoms and molecules. Roughly, they amount to the detection of leveling-out tendencies already in the atomic and molecular behavior. The processes of mixing and diffusion, and other like processes, involve an enormous number of particles. Further, such processes can be super-unstable, where infinitesimal differences in starting conditions result in exponentially diverging trajectories in phase space, and thus in totally different end states of such a process. Therefore, in studying the behavior of such systems as they are in the real world, statistical methods cannot be avoided. And this we see in the works of the above-mentioned authors. If we want to understand causality at work in these systems, where it is expected to appear in its most general and naked form, we must reduce the macroscopic behavior of the system to the interaction of its constituent particles (elements of the system). But all we are able to know about the behavior of those interacting particles is based on the results of statistical investigations, observations, and simulations (mathematical simulation and computer simulation), because this is all we have. We depart from the idea that a system evolves from an initial state to equilibrium. Equilibrium here means that maximally leveled-out states follow one after another; that is to say, at equilibrium the system goes from one maximally leveled-out state to another such state. The entropy is then maximal. If we want to investigate the relationship between causality and dynamic behavior, because
somewhere in the latter we expect to find causality, it is best to investigate all this where this relationship, certainly at first sight, is most obscurely represented, in order not to be misled by investigating tidy systems. Well, this is the case in unstable systems (chaotic systems), such as so-called K-flows, which as such are probably widespread in nature (NOTE 8). In such systems even very small differences in initial conditions (initial states) lead to very large differences in the system's long-term evolution. Statistically, this is expressed by the fact that, upon evolution, the initial small blob (statistically representing the system's initial state in phase space and covering only a small area of that phase space) drastically changes shape while maintaining its volume: with a twisting shape and long branches it now reaches all areas of phase space. Important for our problem (i.e. the relationship between causality and dynamic behavior, including entropy increase) is the just-mentioned fact of the divergence of trajectories in the phase space (not to be confused with the actual trajectories of particles in 3-dimensional space) of such a chaotic system. These trajectories can be simulated, but only in a limited way, because in the computer, too, only rational numbers are possible. For simulating stable systems we can happily round off to rational numbers, but not for simulating unstable systems. In the case of molecules which form real and chaotic systems, like the interacting molecules in a gas, however, the concept of trajectories in phase space, implying point-like starting conditions, must be abandoned. Instead, the starting condition of such a system must be indicated by a small area, a blob, in phase space. This blob represents a large number (in fact an infinite number) of point-like starting conditions of the system, which differ only slightly from each other. The blob in fact represents a probability distribution of these point-like starting conditions. When the system starts to run, this blob evolves, i.e. a bundle of probable trajectories, with different probabilities, emanates from the blob. While its volume remains the same, its shape changes (as has been stated earlier), eventually resulting in its being almost everywhere present in phase space; that is to say, there is no longer a large area of phase space representing possible states that are never visited by the system (as there is in non-ergodic systems). This results from the strongly diverging phase-space trajectories of such an unstable system, where a trajectory in phase space is a sequence of system states, states that are successively passed through (visited) by the system.
Regarding causality, this divergence is rather strange. The cause is a given state of the system, while its immediate effect is the next state. But because the states are coupled to each other by the causal nexus (A, B, C, D, E, F), we can also say that A is a cause and E the longer-term effect of A. Now the effect comes out of the cause, and thus we would expect that causes that are very similar to each other (especially very simple causes, like a spatial configuration of positions and momenta determining the next configuration) will have correspondingly similar effects. In our case we must think as follows: while A eventually causes E, a cause that is very similar to A should cause something that is very similar to E. But unstable dynamical systems show that this need not necessarily be so. Very similar causes can have very dissimilar effects, in the sense of effects that differ very strongly from each other, reflected by the diverging trajectories in phase space.
REMARK: The key assumption in statistical mechanics is that of truly random behavior, which is to be expected in chaotic systems. In order to show this random behavior, an evolving point in the phase space of such a system must eventually visit every point of that phase space (COVENEY & HIGHFIELD, The Arrow of Time, p. 270, Figure 30B of the 1991 Flamingo edition; this Figure is reproduced below).
Figure above: (A) Phase-space portrait of a pendulum for small swings, which constitutes an integrable dynamical system. The trajectory is confined to a very small region of phase space. (B) Phase-space portrait for a collection of molecules in a gas. Here the trajectory probes every part of phase space; the motion is ergodic. The trajectory of a chaotic system (and a collection of molecules in a gas can be assumed chaotic) in phase space does not in fact visit every point of phase space, but the system "will eventually come arbitrarily close to every point of the energy surface" (which I take to be phase space) (PRIGOGINE, I., From Being to Becoming, 1980, p. 33). (After COVENEY & HIGHFIELD, The Arrow of Time, 1991.) End of REMARK.
The fact that two initial states, howsoever similar to each other, can give rise to strongly diverging trajectories in phase

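The sensitive dependence on initial conditions described above can be illustrated with a minimal numerical sketch. The logistic map x → 4x(1 − x) is not the billiard-ball or gas system discussed in the text, but it is a standard chaotic map showing the same qualitative behavior: two starting states, here differing by only one part in a million, soon end up far apart. The map, the initial values, and the step count are illustrative assumptions, not taken from the text.

```python
# Two "very similar causes" fed through a chaotic map produce
# very dissimilar effects: the separation grows until it is of
# the order of the whole state interval [0, 1].

def logistic(x):
    """One step of the logistic map at r = 4 (chaotic regime)."""
    return 4.0 * x * (1.0 - x)

x, y = 0.300000, 0.300001       # illustrative nearby initial states
initial_sep = abs(x - y)        # one part in a million

max_sep = 0.0
for _ in range(50):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

print(f"initial separation: {initial_sep:.1e}")
print(f"largest separation reached in 50 steps: {max_sep:.3f}")
```

For a stable map, rounding the initial state would be harmless; here the rounding error itself grows exponentially, which is why (as the text notes) unstable systems cannot be simulated reliably by single trajectories.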
Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m5.html (2016-02-01)

General Ontology XXIXm5a

complete cycle, discarding none. This is impossible. The reason: the Second Law of Thermodynamics. Like any other fundamental law in physics, it is confirmed by the circumstance that no exception to it has ever been found. We shall encounter the second law of thermodynamics in several different, but equivalent, formulations. We have already encountered the second law as it relates to the behavior of a system of numerous particles, which always proceeds to states of greater disorder. Our first statement of the second law of thermodynamics is as follows: No heat engine, reversible or irreversible, operating in a cycle can take in thermal energy from its surroundings and convert all this thermal energy to work. That is to say, Q_h = Q_c + W, where Q_h is the heat supplied to the engine, Q_c is the heat handed over to the environment, and W is the work performed by the ideal engine (i.e. an engine with no friction). For any cyclic engine Q_c > 0, and e_th < 100 percent.
Consider now a heat engine run in reverse, as a heat pump. See next Figure.
Figure above: (a) Generalized form of a heat pump. (b) Energy flow for a heat pump operating between the temperatures T_h and T_c. During each cycle work W is done on the system, heat in the amount Q_c is extracted from the low-temperature reservoir, and heat in the amount Q_h is exhausted to the high-temperature reservoir. The net effect is that heat is pumped from the low- to the high-temperature reservoir.
Note that the thermal energy Q_h delivered to the hot reservoir is greater than the thermal energy Q_c extracted from the cold reservoir, because W is added. This follows from the first law of thermodynamics. Let's see. Above we found out how to express the First Law when the heat exhaust, heat intake, and work take place more or less instantaneously: DELTA U = Q - W, where Q is the net heat taken in. And when emphasizing that the work W is done on the system, we write DELTA U = Q + W_on. And because we consider one complete cycle, where the system has thus returned to its original state, DELTA U = 0, i.e. there is no net change in internal energy. So we get 0 = Q + W_on. In the heat pump, heat in the amount of Q_c is taken in from the cold reservoir (see drawing above), while eventually more heat (i.e. more heat than was taken in), in the amount of Q_h, is exhausted to the hot reservoir. So the net amount of heat taken in is Q_c - Q_h. So Q = Q_c - Q_h, because Q, as it figures in the First Law as just stated, is the net heat supplied to the system. In fact Q is just the supplied heat, but in the context of heat engines or heat pumps we must make a difference between initial heat intake and net heat intake. And now the above equation is equivalent to 0 = Q_c - Q_h + W_on, which in turn is equivalent to Q_h = Q_c + W_on, where Q_h is exhausted heat and Q_c is heat supplied to the pump. This formulation looks like, but is not the same as, the formulation given above for the second law. Nevertheless we recognize this as the Second Law, because when we reverse all inherent signs of the terms (that is to say: heat in becomes heat out, heat out becomes heat in, W_on becomes W_by, and the heat pump becomes a heat engine), we obtain the first formulation of the second law.
The heat pump is effectively a refrigerator: it removes thermal energy from the cold reservoir. If this reservoir were to have a noninfinite heat capacity, its temperature would fall. An equivalent statement of the second law of thermodynamics can then be given in terms of the general properties of a heat pump: No heat pump, reversible or irreversible, operating over a cycle can transfer thermal energy from a low-temperature reservoir to a higher-temperature reservoir without having work done on it. For any cyclic heat pump W_in = W_on > 0. This statement of the second law tells us that if a hot body and a cold body are placed in thermal contact and isolated, it is impossible for the hot body to get hotter while the cold body gets colder, for this work is needed, even though such a process would not violate energy conservation (the first law of thermodynamics). The observed fact that when a hot object and a cold
object are brought together, they reach a final temperature between the initial temperatures, is an illustration of the second law: heat can spontaneously flow only from a hot body to a cold body.
The Carnot cycle.
Lazare Carnot, the father of the French engineer Sadi Carnot (the latter 1796-1832), had produced an influential description of mechanical engines. He concluded that, in order to obtain maximum efficiency from a mechanical machine, it must be built and made to function so as to reduce to a minimum shocks, friction, or discontinuous changes of speed, in short all that is caused by the sudden contact of bodies moving at different speeds. In doing so he had merely applied the physics of his time: only continuous phenomena are conservative; all abrupt changes in motion cause an irreversible loss of the "living force". Similarly, the ideal heat engine, instead of having to avoid all contacts between bodies moving at different speeds, will have to avoid all contact between bodies having different temperatures. The cycle for a good heat engine therefore has to be designed so that no temperature change results from direct heat flow between two bodies at different temperatures. Since such flows have no mechanical effect, they would merely lead to a loss of efficiency. The ideal cycle is thus a rather tricky device that achieves the paradoxical result of a heat transfer between two sources at different temperatures without any contact between bodies of different temperatures. It is divided into four phases. During each of the two isothermal phases the system is in contact with one of the two heat sources and is kept at the temperature of this source. When in contact with the hot source it absorbs heat and expands. When in contact with the cold source it loses heat and contracts. The two isothermal phases are linked up by two phases in which the system is isolated from the sources, that is, heat no longer enters or leaves the system, but the temperature of the latter changes as a result, respectively, of expansion and compression. The volume continues to change until the system has passed from the temperature of one source to that of the other.
Along these lines, Sadi Carnot recognized that, of all possible heat engines operating between two temperature extremes, the most efficient was a reversible one that would, to describe it again, operate as follows: receive thermal energy isothermally from some hot reservoir maintained at a constant temperature T_h; reject thermal energy isothermally to a cold reservoir maintained at a constant temperature T_c; change temperature in reversible adiabatic processes. Such a cycle, which consists of two isothermal processes bounded by two adiabatic processes, is known as a Carnot cycle. See next two Figures.
Figure above: A Carnot cycle, consisting of two reversible adiabatic and two isothermal processes, operating between the temperatures T_h and T_c. The thin black curved lines are isotherms, meaning that along such a line the temperature does not change. (Two symbols in the Figure indicate, respectively, temperature increment or decrement and heat increment or decrement.) The area (light blue) enclosed by the loop is equal to the work W performed by the cycle.
Figure above: A Carnot cycle, consisting of two reversible adiabatic and two isothermal processes, operating between the temperatures T_h and T_c. The thin black curved lines are isotherms. The area (light blue) enclosed by the loop is equal to the work W performed by the cycle.
Earlier we spoke about the thermal efficiency of any heat engine over a complete cycle. So also for the Carnot cycle the thermal efficiency is related to the heats in and out as follows: e_th = 1 - Q_c/Q_h. The thermal efficiency, by the way, of any reversible cycle (including the Carnot cycle) is independent of the working substance (steam, air, or whatever). So the ratio Q_c/Q_h does not depend on the working substance. Therefore, if the engine operates in a Carnot cycle, the ratio Q_c/Q_h can depend only on the temperatures T_h and T_c at which the heat enters and leaves the system. In the two adiabatic
steps there is no heat exchange at all with the environment (Q = 0). So we can write Q_c/Q_h = T_c/T_h. We could say that this is the definition of a reversible Carnot cycle, because it is one when this is the case. By combining the two expressions we can write the thermal efficiency of a Carnot cycle in terms of temperatures as e_th = 1 - T_c/T_h. This equation gives the maximum (maximum, because it is about a Carnot cycle) thermal efficiency attainable for any engine operating between the temperatures T_h and T_c. We see that it is 100 percent (e_th = 1) only if the engine exhausts heat to a cold reservoir at, and remaining at, the absolute zero of temperature, clearly an impossibility. It is important to realize that the impossibility of 100 percent efficiency, as established here, is not because of friction, for we here consider ideal engines, that is, engines without friction. Heat engines typically have very low efficiency. For example, if an engine takes in heat at the high temperature of 200.0 °C and exhausts heat at a room temperature of 30.0 °C (T_h = 473 K, T_c = 303 K), its maximum efficiency is e_th = 1 - 303/473 = 36 percent. In any real engine friction is present, the processes are not perfectly reversible, and the operating cycle is not a Carnot cycle. Consequently the actual efficiency is even less.
Now we will show (WEIDNER, Physics, 1989, p. 478) that the Carnot cycle is the most efficient of all reversible cycles operating between two fixed temperature extremes. Consider the cycle shown in the next Figure (left image).
Figure above: Left image: A non-Carnot cycle operating between T_h and T_c. Right image: The reversible expansion can be approximated closely by a series of adiabatic and isothermal expansions.
From point a the system expands reversibly along the line ab (neither an adiabatic nor an isothermal path) as the temperature decreases from T_h to T_c. If, like c-a, the path a-b did not have exchanges with the environment, i.e. if the system were thermally isolated not only during c-a but also during a-b, then the a-b path would be the a-c path, i.e. when c-a was done, it would reverse it. But the a-b path is clearly not the a-c path, so the system is not thermally isolated during ab, which means that ab is not adiabatic. And because the temperature changes during ab, it is also not isothermal. The path ab is followed by an isothermal compression to point c, and then an adiabatic compression, which returns the system to starting point a. How does this reversible cycle compare in efficiency with a Carnot cycle between the same temperature extremes? The right-hand image of the above Figure shows how the reversible expansion can be approximated as closely as we wish by a series of small isothermal and adiabatic steps. We then replace the reversible cycle of the left image of the Figure by the small adjacent Carnot cycles shown in the right-hand image. The efficiency of any one of these small Carnot cycles depends on its own upper and lower temperatures T_h' and T_c' according to e_th = 1 - T_c'/T_h'. But in the right-hand image of the above Figure the upper temperature T_h' of any small Carnot cycle is generally (i.e. for at least some of them) not as high as T_h; similarly, in another non-Carnot cycle, the lower temperature T_c' need not be as low as T_c. With T_h' equal to or lower than T_h, and T_c' equal to or higher than T_c, the overall efficiency of the whole reversible cycle must be less than the efficiency of a Carnot cycle between T_h and T_c. Thus we can write e_th <= 1 - T_c/T_h, where T_c and T_h are the temperature extremes of the working substance in the engine.
Entropy (WEIDNER, pp. 479).
Again we will elaborate on that so important thermodynamic variable, the entropy of a system. As will be pointed out further below, the entropy is a quantitative measure of the disorder of the many particles that compose any thermodynamic system. First we will, starting with the Carnot cycle, reason our way to a definition of entropy (in fact a definition of entropy change), and from there we will, in a next Section, arrive at the formulation of the Second Law of Thermodynamics in terms of entropy. Earlier
we had established the following with respect to the Carnot cycle: if the engine operates in a Carnot cycle, the ratio Q_c/Q_h can depend only on the temperatures T_h and T_c at which the heat enters and leaves the system. In the two adiabatic steps there is no heat exchange at all with the environment (Q = 0). So we can write Q_c/Q_h = T_c/T_h. From this relation we can derive another very important relation by a simple mathematical manipulation: Q_h/T_h = Q_c/T_c. The latter equation means that, for a reversible Carnot cycle, the ratio of heat to temperature, which ratio is called the reduced heat, is the same for both the isothermal expansion and the isothermal compression. In the adiabatic steps there is no Q. Or, in other words: the ratio of the heat in to the temperature at which this heat was taken in (and at which isothermal expansion takes place) is equal to the ratio of the heat out to the temperature at which this heat was given out (and at which isothermal compression takes place). Or also: in a Carnot cycle the intake of reduced heat (Q/T) is equal in magnitude to the exhaust of reduced heat (Q/T). See next Figure.
In the analysis that follows we adhere to the following sign convention: heat entering the system is positive, heat leaving the system is negative. Using this convention we then have, for the Carnot cycle, Q_h/T_h + Q_c/T_c = 0. Thus, for a Carnot cycle, the sum of the quantities Q/T around a closed cycle is zero. This rule is actually more general: it holds for any reversible cycle, as we shall now show. Consider the reversible cycle shown in the next Figure.
Figure above: A reversible cycle (light blue, yellow, green) approximated by Carnot cycles.
Any reversible cycle can be approximated as closely as we wish by a series of isothermal and adiabatic processes. That is, a reversible cycle is equivalent to a series of junior Carnot cycles. We can, for example, roughly approximate the cycle in the above Figure by several adjacent Carnot cycles. The above equation holds for each of these. Adding the equations for the individual small Carnot cycles that approximate the original reversible cycle, we have the sum of all their reduced heats equal to zero. We see that no heat enters or leaves the system apart from the processes at the periphery (Q_1in, Q_1out, Q_2in, Q_2out, Q_3in, Q_3out). Therefore we can write the last equation more generally as SUM(Q/T) = 0, where Q stands for the net heat intake (Q_1in + Q_1out, etc., where the signs are already accounted for) and T stands for the temperatures at which intake or exhaust of heat took place. The summation is taken around the periphery of the original cycle. In the limit, i.e. when we have taken the smallest possible junior Carnot cycles in order to obtain a most accurate approximation, we can then write the loop integral of dQ/T equal to zero. The circle on the integral sign indicates that the integration is to be taken around a closed path; we can call this integral a loop integral. In words, the last expression says that, for any reversible cycle, the sum of the quantities dQ/T, giving the ratio of the heat dQ entering the system to the temperature T at which the heat enters, is zero around the cycle. This is equivalent to saying that the integral (i.e. now the path integral) of dQ/T between any initial state i and any final state f is the same for all reversible paths from i to f. Let's explain this. If we add up the two paths of the above Figure (paths both starting from i and both ending up at f), while with our adding starting at i and going around the whole loop, and thus ending up at i again, we get, when we take the inherent directions of the two paths into account: (integral along P_1) - (integral along P_2) = (integral around the whole loop) = 0. This is equivalent to P_1 - P_2 = 0, which is equivalent to P_1 = P_2. So the integral of dQ/T along the path P_1 equals that along the path P_2, between the same end points i and f.
We will now proceed further, to arrive at a macroscopic definition of entropy change, by using an analogy. With a reversible cycle (a Carnot cycle or a non-Carnot cycle) we have to do with a process (course) going from some starting point and finally ending up at this same starting point again, and where the summation of some quantity is equal to zero. Precisely the same
is the case with a conservative force. And this gives us an idea of how to define entropy macroscopically. So we exploit a mathematical property that obtains in the relation between a conservative force F (which is a vector) and the associated potential energy U of the system. For example, the wind is (taken as) a conservative force field: if, when taking a ride, we experience fair wind, we pay for that in terms of unfair wind when we return, along whatever path, back to where we started from. The potential energy difference between two end points i and f is related to the conservative force by U_f - U_i = -(integral from i to f of) F . dr, where F (force) and dr (way in the direction of the force) are vectors. If U_f is smaller than U_i, then the force has done positive net work on the system; in the formula, U_f is considered larger than U_i, so there must be a minus sign before the integral. This relation can be written, however, only if the force is conservative, with the net work (force times way) done by the conservative force equal to zero over a closed loop. In like fashion we may define a thermodynamic quantity, called the entropy S, whose difference depends only on the end points. By definition: S_f - S_i = (integral from i to f, along a reversible path, of) dQ/T. Note that the integration may be carried out along any reversible path leading from i to f. This equation reduces to the loop integral of dQ/T being zero, which was established above, when i = f (around a closed loop). Now suppose that some system proceeds irreversibly from state i to f. We cannot represent any irreversible process by a path on a pV diagram. Nevertheless we can determine the entropy difference between the states i and f. We simply imagine the system to pass from i to f along a reversible path connecting the two end points, and compute the change in entropy using the above formula defining entropy difference. This is allowed because the entropy difference depends only on the end points (as established above when discussing different paths between the same end points), not on the path. Analogously, in mechanics we can evaluate the potential energy difference between two points even when a nonconservative (dissipative) force also acts and the system is not able to pass reversibly between the end points. We do this by computing the work done by the conservative force alone.
In general, when a system is taken round a complete reversible cycle and returned to its initial state i, the following changes occur: The net change in internal energy is zero (DELTA U = 0). The net change in entropy is zero (DELTA S = 0). The work W done by the system is equal to the area enclosed by the loop on the pV diagram. By the First Law of Thermodynamics, the net heat Q entering the system is then Q = W. In the case of a cycle as we see it in a heat engine, we must speak of net heat entering the system, because we can suppose that there is also heat leaving the system, which means that the total heat entering the system is greater still. The first law is indeed about an energy balance: DELTA U = Q - W_by; by the first law DELTA U = 0, so 0 = Q - W_by, i.e. Q = W_by. Now suppose that a system is taken through an irreversible cycle and returned to its initial state i. The change in internal energy again is zero (DELTA U = 0). The change in entropy is also zero (DELTA S = 0). The system has done work in the amount W, but it is not representable by any area on a pV diagram. Once again, the net heat entering the system is Q = W.
Entropy and the Second Law of Thermodynamics.
Having obtained a definition of entropy change, we can now state the Second Law of Thermodynamics in terms of entropy: For an isolated system the total entropy remains constant in time if all processes occurring within the system are reversible. On the other hand, the total entropy of an isolated system increases with time if any process within the system is irreversible. Since all actual macroscopic systems undergo irreversible processes, the total entropy of any real system always increases with time. It is easy to show the equivalence between this statement of the Second Law and the one given earlier, based on heat-engine behavior, which read: the heat Q_h supplied to the engine is equal to the sum of the heat Q_c given off
to the environment and work W by done by the engine meaning that not all heat supplied to the engine can be converted to work Q h Q c W by Consider a system composed of a hot reservoir at temperature T h a cold reservoir at temperature T c and a heat engine operating between the two heat reservoirs as shown in the next Figure Figure above The system consisting of the heat engine together with the hot and cold reservoirs chosen in applying the second law of thermodynamics to heat engines and in computing entropy changes The engine may be either reversible or irreversible For each complete cycle of the heat engine the total change in the entropy of the entire system the heat engine and its surroundings is accounted for as follows The hot reservoir loses entropy because heat Q h leaves the reservoir DELTA S hot reservoir Q h T h supplied reduced heat to the engine The cold reservoir gains entropy because heat Q c enters the reservoir DELTA S cold reservoir Q c T c exhausted reduced heat from the engine The heat engine alone undergoes no change in entropy since it is returned to its initial state with the same entropy after completing a cycle whether the engine is reversible or not Work is done by the engine and energy leaves the system But no entropy leaves or enters the system since ordered energy associated with work has no entropy content Adding all contributions we find for the total entropy change DELTA S of the system The change in entropy net gain of entropy is thus equal to the net reduced heat that is exhausted to the environment which here is the cold reservoir Now recall the definition of the thermal efficiency of any heat engine Earlier we proved that no engine operating between the two fixed temperatures T h and T c could be more efficient than a reversible Carnot engine whose efficiency is Therefore or rewritten see below That is to say the exhausted reduced heat is equal to or greater than the imported reduced heat So the first equation obtained above 
The rewriting done above can be explained as follows. We had the inequality 1 - Q_c/Q_h ≤ 1 - T_c/T_h. Subtracting 1 from both members yields the equivalent inequality -Q_c/Q_h ≤ -T_c/T_h. From this we get an equivalent inverse one, Q_c/Q_h ≥ T_c/T_h, by changing the minus signs into plus signs and reversing the inequality sign: if we change the signs of both members of an inequality, we get an equivalent relation only if we also reverse the inequality sign. If we then turn the quotients upside down, we must again reverse the inequality sign in order to obtain an equivalent relation: Q_h/Q_c ≤ T_h/T_c. Dividing both members by T_h gives the equivalent relation Q_h/(Q_c · T_h) ≤ 1/T_c, and multiplying both members by Q_c gives Q_h/T_h ≤ Q_c/T_c, which of course is equivalent to Q_c/T_c ≥ Q_h/T_h. This is indeed the result of the above rewriting.

A note on equilibrium. Earlier we defined thermodynamic equilibrium in terms of mechanical, chemical, and thermal equilibrium; in thermodynamic systems we can concentrate on the latter. Thermal equilibrium means that all the parts of the system have the same temperature; only then can a temperature of the system as a whole be defined. So a system can be at equilibrium while having a temperature T_1. But it can also, say later, be in equilibrium while having a higher (or lower, for that matter) temperature T_2. When the process is not too fast, the system can go from a T_1 equilibrium state to a T_2 equilibrium state. So the system can be in equilibrium even while its temperature changes, provided the change goes slowly. In the case of an engine running through a reversible Carnot cycle, we can say that all four processes (isothermal expansion, adiabatic expansion, isothermal
compression, adiabatic compression) can be represented as lines in a pV diagram. So the system passes through a long succession of equilibrium states, and only equilibrium states, when it goes through one cycle. The motor that makes this cycling through equilibrium states possible is the temperature difference between the hot reservoir and the cold reservoir, which take care of the intake and exhaust of energy into and from the engine.

When we heat a layer of liquid from below, a temperature gradient is set up in the liquid, and as long as this gradient exists the liquid is not in thermodynamic equilibrium. By intensifying the heating from below, the liquid is pushed further and further away from equilibrium. Finally, when the liquid is no longer capable of transporting the heat from bottom to top by conduction alone, its molecules spontaneously organize themselves such that the heat can now be transported by convection, in which the molecules move over macroscopic distances. Such systems, pushed far away from equilibrium and then reaching some critical point after which they self-organize, will be discussed in a later document on far-from-equilibrium thermodynamics. The processes involved in far-from-equilibrium situations occur in organisms, but also in the generation of dendritic crystals as seen in snow, adding another element to the crystal analogy. But in a Carnot cycle the system passes through equilibrium states only, which differ from each other with respect to pressure, volume, and temperature; at every single moment the parts of the system have equal temperatures, that is to say, the system is in equilibrium.

Microscopic consideration of entropy

Macroscopically, entropy is stated in terms of heat and temperature, both of which are macroscopic variables. The microscopic consideration states entropy in terms of disorder. To characterize the concept of temperature it was sufficient to invoke the average energy of the molecules. Entropy is about the way this energy is
distributed among the particles. The entropy S indicates the degree of randomness. More precisely,

S = k ln P,

in which ln is the natural logarithm, P is the number of ways in which a configuration can result by interchanging the roles of the different particles (permutation), or, equivalently, P is the probability, in terms of the number of permutations, that the system will be in the state it is actually in, compared with all other possible states, and k is the Boltzmann constant. So for the most improbable configuration, for instance one particular (i.e., chosen) particle having all the energy of the system while all the other particles have zero energy, we have S = 0, because P = 1, which means that there is only one way to formally accomplish this configuration: all energy must go to that chosen particle. And because ln 1 = 0, we get S = 0.

If we speak of the probability of a state, we must realize that for every state there is only one way to achieve it, not in the physical sense but formally, in the sense of a pattern of distribution of velocities or positions among the particles. So when we say that a disordered state is more probable than an ordered one, we must consider not particular individual states but categories (configuration categories) of states. Now when we describe a given state only as belonging to the category of disordered states, while not distinguishing between the different states that belong to this category, the probability of the system being in one of these states is greater than that of it being in one of the states belonging to the category of ordered states, because there are many more ways (permutations) to formally achieve one of the disordered states than there are ways (permutations) to formally achieve one of the ordered states.

The following explanation of the microscopic definition of entropy is partly taken from R. ADAIR, The Great Design: Particles, Fields, and Creation, 1987, p. 142. Let us imagine a billiard table with no side holes and three colored balls. We further
imagine that there is no friction whatsoever when the balls roll over the table and collide. Say that we have a green, a black, and a red ball. Further, we have divided the table into three equal areas, which we call the upper, middle, and lower area. We set the three balls in motion by imparting energy to the system, say by banging the underside of the table. The balls will then roll around indefinitely, because no energy is lost by friction. We then take a set of pictures of the table and balls at random times and classify the pictures as to the configurations of the balls with respect to the three areas. So we consider the distribution of the balls among the three areas, not counting pictures where a ball lies on the border between two areas, and not considering where within an area a ball lies. We only take into account in what area (upper, middle, lower) a ball of a certain color (green, black, red) lies at the moment a picture was taken. Because we have three balls (three colors) and three areas, there are 3 × 3 × 3 = 27 possible configurations of the three balls distributed among the three areas. These configurations let themselves be classified into 10 configurational categories:

Category 1: one ball in each area.
Category 2: one ball in the upper area, two balls in the middle area.
Category 3: one ball in the upper area, two balls in the lower area.
Category 4: two balls in the upper area, one ball in the middle area.
Category 5: one ball in the middle area, two balls in the lower area.
Category 6: two balls in the upper area, one ball in the lower area.
Category 7: two balls in the middle area, one ball in the lower area.
Category 8: three balls in the upper area.
Category 9: three balls in the middle area.
Category 10: three balls in the lower area.

Let us draw these 10 categories. (Figure above: The ten configurational categories with respect to three balls and three areas. By not accounting for colors we go from individual configurations to categories of configurations.) The First Category consists of six individual configurations, which means that a configuration specified only as
to belong to Category 1 can be formally made in six ways. (Figure above: The six ways to formally construct a configuration specified only as belonging to Category 1.) The Second through Seventh Categories each consist of three individual configurations, which means that a configuration specified only as belonging to one of these categories can be formally made in three ways. (Figures above: for each of Categories 2 through 7, the three ways to formally construct a configuration specified only as belonging to that category.) The Eighth Category consists of only one individual configuration, which means that a configuration specified only as belonging to Category 8 can
be formally made in one way only. (Figure above: The one way to formally construct a configuration specified only as belonging to Category 8.) The Ninth Category likewise consists of only one individual configuration, as does the Tenth: a configuration specified only as belonging to Category 9, or to Category 10, can be formally made in one way only. (Figures above: the one way to formally construct a configuration specified only as belonging to Category 9, and likewise for Category 10.) So now we have shown all 27 configurations. When the system is in equilibrium, the balls are rolling about randomly (equilibrium means a leveling-out of differences, absence of internal pattern, maximum symmetry), and each of the 27 configurations has the same probability of representing the spatial state of the system at any chosen moment. This probability is 1/27.
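The counting just carried out can be reproduced mechanically. A small sketch (the code and its names are mine, not from ADAIR or the text):

```python
from itertools import product
from collections import Counter
import math

AREAS = ("upper", "middle", "lower")

# Every assignment of the three distinguishable balls (green, black, red)
# to one of the three areas: 3 * 3 * 3 = 27 individual configurations.
configs = list(product(AREAS, repeat=3))

# A configurational category ignores the colors: it is fixed by the
# occupation numbers (how many balls in upper, middle, lower).
def category(cfg):
    counts = Counter(cfg)
    return tuple(counts[a] for a in AREAS)

multiplicity = Counter(category(cfg) for cfg in configs)
print(len(configs))                                  # 27 configurations
print(len(multiplicity))                             # 10 categories
print(sorted(multiplicity.values(), reverse=True))   # [6, 3, 3, 3, 3, 3, 3, 1, 1, 1]

# S = k ln P, here with k = 1: the spread-out category (one ball per area,
# P = 6) has higher entropy than a fully ordered one (all in upper, P = 1).
print(round(math.log(multiplicity[(1, 1, 1)]), 2))   # ln 6, about 1.79
print(math.log(multiplicity[(3, 0, 0)]))             # ln 1 = 0.0
```

The category sizes 6, 3 (six times), and 1 (three times) sum to the 27 equally probable configurations, as counted in the text.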

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m5a.html (2016-02-01)

General Ontology XXIXm5b

Say that at time t there are 20 marbles in box A, and thus 80 in B. (In what follows, "·" denotes multiplication, and all logarithms are natural logarithms.) First we look at box A. The probability that a chosen marble turns out to be in A is 20/100 = 1/5, so P_A(t) = 1/5. Then P_A(t)/P_eq,A = (1/5)/(1/2) = 2/5, and ln(2/5) = -0.92. P_A(t) · ln(P_A(t)/P_eq,A) is then (1/5) · (-0.92) = -0.184. Now we look at box B. The probability that a chosen marble turns out to be in B is 80/100 = 4/5, so P_B(t) = 4/5; and indeed P_A(t) + P_B(t) = 1, a probability of 100 percent. Then P_B(t)/P_eq,B = (4/5)/(1/2) = 8/5, and ln(8/5) = 0.47. P_B(t) · ln(P_B(t)/P_eq,B) is then (4/5) · 0.47 = 0.376. We must now add, according to the summation sign in the formula, the results of the two boxes: -0.184 + 0.376 = 0.19. So the H value for time t is 0.19.

When the system is in equilibrium, P_A(t) = P_B(t) = P_eq,A = P_eq,B, and then ln(P_A(t)/P_eq,A) = ln(P_B(t)/P_eq,B) = ln 1 = 0. Therefore P_A(t) · 0 = 0, and also P_B(t) · 0 = 0, and so P_A(t) · 0 + P_B(t) · 0 = 0. So the H value is 0 when the system is in equilibrium.

Let us give a second example, that is to say, a different distribution of the 100 marbles between the two boxes A and B. Say that at time t (whatever that time is) there are 10 marbles in box A and thus 90 in B. Again P_eq,A = P_eq,B = 50/100 = 1/2. First we look at box A. The probability that a chosen marble turns out to be in A is 10/100 = 1/10, so P_A(t) = 1/10. Then P_A(t)/P_eq,A = (1/10)/(1/2) = 1/5, and ln(1/5) = -1.61. P_A(t) · ln(P_A(t)/P_eq,A) is then (1/10) · (-1.61) = -0.16. Now we look at box B. The probability that a chosen marble turns out to be in B is 90/100 = 9/10, so P_B(t) = 9/10; and indeed P_A(t) + P_B(t) = 1. Then P_B(t)/P_eq,B = (9/10)/(1/2) = 9/5, and ln(9/5) = 0.59. P_B(t) · ln(P_B(t)/P_eq,B) is then (9/10) · 0.59 = 0.53. Adding the results of the two boxes: -0.16 + 0.53 = 0.37. So the H value for time t is 0.37.

Let us give a third example, yet another distribution of the 100 marbles between the two boxes A and B. Say that at time t there are 0 marbles in box A and thus 100 in B. Again P_eq,A = P_eq,B = 1/2. First box A: the probability that a chosen marble turns out to be in A is 0/100 = 0, so P_A(t) = 0, and consequently the term P_A(t) · ln(P_A(t)/P_eq,A) is 0. Now box B: the probability that a chosen marble turns out to be in B is 100/100 = 1, so P_B(t) = 1, and indeed P_A(t) + P_B(t) = 1. Then P_B(t)/P_eq,B = 1/(1/2) = 2, and ln 2 = 0.69. P_B(t) · ln(P_B(t)/P_eq,B) is then 1 · 0.69 = 0.69. Adding the results of the two boxes: 0 + 0.69 = 0.69. So the H value for time t is 0.69.

Let us give a fourth and last example, yet another distribution of the 100 marbles between the two boxes A and B. Say that at time t there are 50 marbles in box A and thus 50 in B; this is of course the equilibrium distribution. Again P_eq,A = P_eq,B = 1/2. Box A: the probability that a chosen marble turns out to be in A is 50/100 = 1/2, so P_A(t) = 1/2. Then P_A(t)/P_eq,A = (1/2)/(1/2) = 1, ln 1 = 0, and P_A(t) · 0 = 0. Box B gives, by the same computation, 0. Adding the results of the two boxes: 0 + 0 = 0. So the H value for time t, which here is the equilibrium time, is 0.
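These worked examples can be verified in a few lines. A sketch (the function H and its name are mine):

```python
import math

def H(counts):
    """H = sum over boxes of P(t) * ln(P(t) / P_eq) for equally likely boxes;
    a box with P(t) = 0 contributes nothing to the sum."""
    N = sum(counts)
    k = len(counts)                      # P_eq = 1 / k for each box
    return sum((n / N) * math.log((n / N) * k) for n in counts if n)

for n_A in (0, 10, 20, 50):
    print((n_A, 100 - n_A), round(H((n_A, 100 - n_A)), 2))
# (0, 100) gives 0.69, (10, 90) gives 0.37,
# (20, 80) gives 0.19, (50, 50) gives 0.0
```

With unrounded logarithms the 20/80 case comes out as 0.19; rounding the intermediate logarithms to two decimals, as in a hand computation, can shift it to 0.20.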
Let us summarize these four results:

Distribution : H value
0, 100 : 0.69
10, 90 : 0.37
20, 80 : 0.19
50, 50 : 0

We see that as the leveling-out increases, the H value decreases; its limit is 0.

Finally, we calculate the H value generally. The total number of marbles is N; there are n marbles in box A and consequently N - n marbles in box B. P_eq,A = P_eq,B = (N/2)/N = 1/2. For box A: P_A(t) = n/N; P_A(t)/P_eq,A = (n/N)/(1/2) = 2n/N; ln(P_A(t)/P_eq,A) = ln(2n/N); and P_A(t) · ln(P_A(t)/P_eq,A) = (n/N) · ln(2n/N). For box B: P_B(t) = (N - n)/N = 1 - n/N; P_B(t)/P_eq,B = (1 - n/N)/(1/2) = 2 - 2n/N; ln(P_B(t)/P_eq,B) = ln(2 - 2n/N); and P_B(t) · ln(P_B(t)/P_eq,B) = (1 - n/N) · ln(2 - 2n/N). We must now add, according to the summation sign in the formula, the results of the two boxes. So the general H value for time t in the Ehrenfest model is

H = (n/N) · ln(2n/N) + (1 - n/N) · ln(2 - 2n/N).

We will now compute H values for a more generalized system, that is, for a system consisting of more than two boxes. Let us suppose eight boxes and, for simplicity's sake, eight particles. Again each particle has the same probability of being chosen, and we then see whether that particle is in the box under consideration; it follows that the probability of finding a particle there is the number of particles present in that box divided by the total number of particles. We then, as in the Ehrenfest model, take this particle out of its box and put it into another, randomly chosen, box. It is clear that in this process the box containing the largest number of particles (as compared with the other boxes) has the highest probability of particles being removed from it, while the box containing the smallest number of particles has the lowest such probability. It is also clear that this process ultimately results in a leveling-out of the distribution of particles among the eight boxes. Equilibrium is reached when each box contains the same number of particles; in the present case that means that each box contains one particle. So at equilibrium P_eq,A = P_eq,B = P_eq,C, etc., is equal to 1/8, where the
eight boxes are labelled by the letters A, B, C, D, E, F, G, H. The eight particles allow for many different distributions, for example 6 particles in one box and 2 in another. The definition of the H quantity (i.e., its formula) does not distinguish between where in the sequence of boxes A, B, C, D, E, F, G, H the 2 particles are and where the 6 particles; so, for example, the following distributions are equivalent. It is just about the numerical diffusion of the number of particles making up the total. When we compute the H values belonging to the corresponding distributions, we again follow the formula given above; that is to say, for each box X (where X is A or B or C, etc.) we compute P_X(t), then P_eq,X (which is 1/8), then P_X(t)/P_eq,X, then ln(P_X(t)/P_eq,X), and finally P_X(t) · ln(P_X(t)/P_eq,X). This result is to be obtained for each box, and the eight results are then added together, yielding the H value for time t. We will now consider some possible distributions of the eight particles among the eight boxes A, B, C, D, E, F, G, H and compute the corresponding H values. The results of the computations are summarized in the following overview. (Figure above: Some possible distributions, diffusion states, of eight particles among eight boxes, and their corresponding H values.) We see that as the leveling-out increases, the H value decreases. (The next Figure is the same as the previous one, but miniaturized to obtain a direct overview.) We can clearly see that when the distribution diffuses, the H value decreases.

Strongly heterogeneous (i.e., unequal) distributions. The less uniformity a given distribution of particles in a container has, the more information it possesses. The container can be imagined to be partitioned into a large number of separate regions, without walls separating these regions (i.e., the container is only mentally divided into those regions), allowing us to assess the degree of uniformity
of the distribution of particles contained in it. And the less uniformity there is, the higher the corresponding H value. Let us give two examples of such a high degree of heterogeneity (low degree of uniformity or homogeneity).

Suppose we have a container which we have mentally divided into 10,000 non-overlapping regions, and suppose there are 1,000,000 particles, and that at time t they are all present in one such region k, while there are none in the other regions. Let us compute the H value pertaining to this distribution. If each region were to contain 100 particles, we would have the equilibrium distribution, so P_eq,k = 100/1,000,000 = 1/10,000. For region k: P_k(t) = 1,000,000/1,000,000 = 1; P_eq,k = 1/10,000; P_k(t)/P_eq,k = 1/(1/10,000) = 10,000; ln(P_k(t)/P_eq,k) = ln 10,000 = 9.210 (as always, the natural logarithm); and P_k(t) · ln(P_k(t)/P_eq,k) = 1 · 9.210 = 9.210. For each other region r we have P_r(t) = 0/1,000,000 = 0, so P_r(t) · ln(P_r(t)/P_eq,r) = 0. So the H value of this distribution (10^6, 0, 0, ..., 0), with 10,000 - 1 zeros, is 9.210 + 0 + 0 + ... + 0 = 9.210. We should compare this value with the value 2.08 obtained earlier, pertaining to the distribution (8, 0, 0, 0, 0, 0, 0, 0).

As a second example of a strongly heterogeneous distribution, suppose we have a container that is mentally divided into 1,000,000,000 = 10^9 regions, and that we have 10^11 particles, and that at time t they are all present in one such region k, while there are none in the other regions. Let us compute the H value pertaining to this distribution. If each region were to contain 100 particles, we would have the equilibrium distribution, so P_eq,k = 100/10^11 = 1/10^9. For region k: P_k(t) = 10^11/10^11 = 1; P_eq,k = 1/10^9; P_k(t)/P_eq,k = 10^9; ln(P_k(t)/P_eq,k) = ln 10^9 = 20.723; and P_k(t) · ln(P_k(t)/P_eq,k) = 1 · 20.723 = 20.723. For each other region r we have P_r(t) = 0/10^11 = 0, so P_r(t) · ln(P_r(t)/P_eq,r) = 0. So the H value of this distribution (10^11, 0, 0, ..., 0), with 10^9 minus
one zeros, is 20.723 + 0 + 0 + ... + 0 = 20.723. We should compare this value with the value 9.210 pertaining to the previous distribution, and the value 2.08 obtained earlier pertaining to the distribution (8, 0, 0, 0, 0, 0, 0, 0). We see that a very low uniformity corresponds to a high H value, and it is to be expected that distributions that tend to be infinitely heterogeneous have an H value that approaches infinity. Such distributions contain a large amount of information, and as soon as this information becomes (in the limit) infinite, they are not realizable in nature. But it should be clear that already a distribution with a finite amount of information is not realizable when this amount exceeds a certain finite threshold.

There are other distributions (we are still talking about spatial distributions) that seem to be less uniform than certain others but nevertheless have the same H value. Let us give a few examples. Above we saw that the distribution (8, 0, 0, 0, 0, 0, 0, 0), i.e., eight particles, eight boxes, all eight particles in one box (box A), a distribution present at time t, can be represented by the H value 2.08. Let us calculate it explicitly. Box A: P_A(t) = 8/8 = 1; P_eq,A = 1/8; P_A(t)/P_eq,A = 1/(1/8) = 8; ln(P_A(t)/P_eq,A) = ln 8 = 2.08; and P_A(t) · ln(P_A(t)/P_eq,A) = 1 · 2.08 = 2.08. Box B: P_B(t) = 0/8 = 0, so P_B(t) · ln(P_B(t)/P_eq,B) = 0. The same goes for all the remaining boxes: P_C(t) = 0/8, P_D(t) = 0/8, etc. So the H value for this distribution is 2.08 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 2.08.

We could wonder what would be the case if we still had these eight boxes but now, say, 1600 particles instead of 8, all of them in one box (box A) and none in the other seven. At first sight the distribution (8, 0, 0, 0, 0, 0, 0, 0) seems to be much more uniform, that is, much more homogeneous, than the distribution (1600, 0, 0, 0, 0, 0, 0, 0), but in fact they have the same degree of non-uniformity. Let us calculate the H value of this last-mentioned distribution (again a distribution present at time t). Box A: P_A(t) = 1600/1600 = 1. At equilibrium the 1600 particles are equally distributed among the eight
boxes, which means that each box then contains 1600/8 = 200 particles. So P_eq,A = (1600/8)/1600 = 1/8; P_A(t)/P_eq,A = 1/(1/8) = 8; ln(P_A(t)/P_eq,A) = ln 8 = 2.08; and P_A(t) · ln(P_A(t)/P_eq,A) = 1 · 2.08 = 2.08. Box B: P_B(t) = 0/1600 = 0, so P_B(t) · ln(P_B(t)/P_eq,B) = 0, and the same goes for all the remaining boxes: P_C(t) = 0/1600, P_D(t) = 0/1600, etc. So the H value for this distribution is 2.08 + 0 + 0 + 0 + 0 + 0 + 0 + 0 = 2.08.

So we see that both distributions have the same H value, and it is now clear what this value actually tells us. Although both distributions have the same H value, the one looks more extreme, in the sense of 1600 particles being concentrated in one box instead of only eight. But if we look at their respective equilibrium distributions, we see that in the case of a total of 1600 particles, 200 are crowded into each box, against only one in the case of a total of 8 particles:

(1, 1, 1, 1, 1, 1, 1, 1): equilibrium distribution for 8 particles and eight boxes;
(200, 200, 200, 200, 200, 200, 200, 200): equilibrium distribution for 1600 particles and eight boxes.

So the higher degree of crowdedness of 1600 instead of only eight particles in one box at time t corresponds to the greater crowdedness of 200 instead of only one particle in each box at the time of equilibrium. And now it is clear that the transition from (1, 1, 1, 1, 1, 1, 1, 1) to (8, 0, 0, 0, 0, 0, 0, 0) reflects the same increase in the degree of crowding as the transition from (200, 200, 200, 200, 200, 200, 200, 200) to (1600, 0, 0, 0, 0, 0, 0, 0). We see that the ratio of the number of particles present in each box at the time of equilibrium to the total number of particles that could be crowded into one box is the same in all cases of eight boxes, whatever the total number of particles; for our two discussed cases these ratios were 1/8 and 200/1600 respectively, which are equal. Generally, when the number of boxes is k and the total number of particles is N (which total number can be crowded into one box), the number of particles present in each box at the time of equilibrium is N/k, the total number of particles is N, and so the just-mentioned ratio is (N/k)/N = 1/k.
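Before moving on, the H values cited in this subsection can all be reproduced with the same general formula. A sketch (the function name is mine; the 10^9-region case is evaluated in closed form to avoid building a billion-element list):

```python
import math

def H(counts):
    """H = sum over boxes of P(t) * ln(P(t) / P_eq), equally likely boxes."""
    N = sum(counts)
    k = len(counts)
    return sum((n / N) * math.log((n / N) * k) for n in counts if n)

print(round(H([8] + [0] * 7), 2))                 # 2.08 = ln 8
print(round(H([1600] + [0] * 7), 2))              # 2.08 again: same shape
print(round(H([10**6] + [0] * (10**4 - 1)), 2))   # 9.21, i.e. ln 10**4
# all particles in one of 10**9 regions: H = ln(10**9), closed form
print(round(math.log(10**9), 3))                  # 20.723
```

When everything sits in one of k equally likely boxes, H collapses to ln k, which is why the total particle number drops out, exactly as the 1/k ratio above suggests.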
We see that this ratio is independent of the total number of particles distributed among the k boxes.

The course of the H function

It can be shown that the H function, which we obtain when plotting the H values pertaining to successive distributions of particles that become more and more uniform in time, decreases in a uniform fashion, in accordance with the Figure above. This is why H plays the role of -S, i.e., minus the entropy. The uniform decrease of H has a very simple meaning: it measures the progressive uniformization of the system. The initial information is lost, and the system evolves from order to disorder. PRIGOGINE and STENGERS note that a Markov process, as exemplified by the Ehrenfest model or by its generalizations, implies fluctuations (see Figure above): if we were to wait long enough, we would recover the initial state. However, with respect to the H function we are dealing with averages. The H quantity that decreases uniformly is expressed in terms of probability distributions, not in terms of individual events. It is the probability distribution that evolves irreversibly. Therefore, on the level of distribution functions, Markov chains lead to a one-wayness in time.

It is important to dwell a little longer on this point. A generalized Ehrenfest process is a process of changing spatial distributions. Such a distribution is a distribution of the particles of a given set among imaginary subdivisions of a container. There is a definite number of these subdivisions (boxes) and a definite number of particles. Further, we assume a starting state which represents a clearly inhomogeneous distribution of the particles among these boxes. Now, every (say) second we take a particle at random from one of the boxes and put it into another box; this changes the distribution of the particles. The just-mentioned "at random" means that each particle of the total set has exactly the same chance of being taken and then transferred to another box. This implies that in
the case of a box containing relatively many particles, there is a correspondingly high probability that a particle is taken from it and transferred to another box, as compared with boxes containing only a relatively small number of particles. As this process proceeds, the system will, with small ups and downs, approach the equilibrium distribution, in which every box contains the same number of particles. This is clearly a model for the diffusion of a diluted gas through air: a gas (say chlorine) that initially found itself localized in a certain part of an air-filled container diffuses through the air of the container until it becomes evenly spread throughout the volume of this container. If we follow this diffusion process, we see a succession of different distributions. This succession generally tends to go to an equilibrium distribution. But because the choice of taking a particle and transferring it to another box is random, it can happen that the course to the equilibrium distribution is not smooth: from a given distribution there could follow one or more distributions that are farther away from the equilibrium distribution than was the initial distribution, and these can be followed by other distributions which are closer again to the equilibrium distribution. Of course such a process, seen in this way, cannot carry the arrow of time, because the latter is supposed to be a continuous flow without hops and bumps. How then can the arrow of time, so evident at the macroscopic level, be detected at the microscopic level of diffusing particles? Or, equivalently, can we detect some quantity that smoothly and uniformly changes during this diffusion process? Indeed, Boltzmann found such a quantity: the H quantity. This quantity smoothly and uniformly decreases as time goes by during the diffusion process; it is the smooth and uniform H function. But why is this function smooth and uniform? At first sight we would expect it not to be uniform. Each distribution of a number of particles among
boxes corresponds to a definite H value, so when we follow the actual succession of distributions taking place in a diffusion process, a succession which, as has been said, takes place with ups and downs, the sequence of corresponding H values will certainly not seem to represent a smooth and uniform succession. This reasoning is, however, false. The H values are computed from probabilities, not from individual events; that is, it is about averages. The quantities P_k(t) and P_eq,k which determine the H value of a given distribution are probabilities. P_k(t) is the probability that at time t a particle is taken from box k and transferred to another box. Say that this probability is 1/8. This does not mean that if we repeat the action of taking a particle at random eight times, every time putting the particle back into box k if it was taken therefrom, in exactly one such repeat the particle is taken from box k while in the seven other repeats it is taken from some other box. On the contrary, it could happen that in none of these eight repeats (the first included) a particle was taken from box k, or that in, say, three of the eight repeats a particle was taken from that box. So what then does P_k(t) = 1/8 mean? It means that if we repeat the action of taking a particle at random many times, every time putting the particle back into box k if it was taken therefrom, then the ratio of the total number of particles actually taken from box k to the total number of repeats (including the initial action) approaches 1/8. So when P_k(t) = 1/8 there is a 1/8 chance that a particle is taken from box k at time t; in a large number of repeats, approximately 1/8 of them will consist in taking a particle from box k. In fact this means that when we involve the H function, and thus involve probabilities, we presuppose that we repeat the diffusion process (i.e., the sequence of distributions) many times and then take the average. And now it is clear that a great many of such repeats, each containing irregularities, will, when superimposed upon each other, give a smooth and uniform succession. And such a succession is expressed by the H function.
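This averaging argument can be illustrated by a small simulation of the two-box Ehrenfest model: individual runs fluctuate, but the H curve averaged over many runs decreases smoothly toward 0. A sketch (all names and parameter choices are mine, not from the text):

```python
import math
import random

def H(n, N):
    """H for n marbles in box A out of N total, two boxes (P_eq = 1/2)."""
    total = 0.0
    for p in (n / N, 1 - n / N):
        if p:
            total += p * math.log(2 * p)
    return total

def averaged_H_curve(N=100, steps=300, runs=2000, seed=0):
    """Average H(t) over many runs, each starting with all N marbles in box A.
    One step: pick a marble uniformly at random and move it to the other box."""
    rng = random.Random(seed)
    avg = [0.0] * (steps + 1)
    for _ in range(runs):
        n = N                                        # marbles currently in box A
        for t in range(steps + 1):
            avg[t] += H(n, N) / runs
            n += -1 if rng.random() < n / N else 1   # A -> B with probability n/N
    return avg

curve = averaged_H_curve()
print(round(curve[0], 3))          # ln 2, about 0.693: everything in one box
for t in (25, 75, 300):
    print(t, round(curve[t], 3))   # decreases toward 0 as 50/50 is approached
```

Any single run shows hops and bumps, but the superposition of many runs yields the smooth, uniformly decreasing curve that the H function expresses.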
Why equilibrium lies in the future: the one-wayness of time

Having considered (1) partitions associated with the Baker transformation, (2) contracting and dilating fibers in that same Baker transformation, (3) Markov chains, and (4) the H function, we can now present a possible explanation, based on PRIGOGINE and STENGERS, 1984, of why experience always shows macroscopic processes to go to equilibrium in the future and not in the past. Let us recapitulate some considerations made earlier, where we introduced contracting and dilating fibers in the Baker transformation. Earlier we gave a Figure showing the evolution of these fibers, going forward and, when read backwards, backward in time, and commented on these fibers as follows. Let us concentrate on the precise difference between contracting and dilating fibers (see Figure above). A system as unstable as the Baker transformation is a system of scattering hard spheres. Here contracting and dilating fibers have a simple physical interpretation. A contracting fiber corresponds to a collection of hard spheres whose velocities (expressing speed and direction) are randomly distributed in the far distant past (equilibrium in the past) and whose velocities all become parallel in the far distant future (one point in phase space). A dilating fiber corresponds to the inverse situation, in which we start with parallel velocities (one point in phase space) and go to a random distribution of velocities (equilibrium in the future). The exclusion of the contracting fibers corresponds to the experimental and observational fact that, whatever the ingenuity of the experimenter or the skill of the observer, he will never be able to control or observe the system so as to produce parallel velocities after an arbitrary number of collisions. So in a way the Second Law of Thermodynamics (the law of ever-increasing entropy) acts as a selection principle, excluding the contracting fibers
and thus only admitting systems that go to equilibrium in the future, provided they can go about their business unimpeded, i.e. spontaneously. Once we exclude contracting fibers, we are left with only one of the two possible Markov chains. In other words, the Second Law becomes a selection principle of initial conditions (one such condition is a contracting fiber, while the other is a dilating fiber): only initial conditions that go to equilibrium in the future are retained. Now we continue this discussion in order to explain why some initial conditions are allowed by the Second Law and others prohibited. A contracting fiber and a dilating fiber correspond to two realizations of dynamics, each involving symmetry breaking and appearing in pairs (PRIGOGINE & STENGERS, p. 275 of the 1985 Flamingo edition). These two realizations can be seen as two solutions, both, and each for itself, satisfying some dynamic equation. Insofar as we have these two solutions, symmetry is not broken. But as only one of them is realized as the actual outcome of a real-world process, symmetry is broken. The contracting fiber corresponds to equilibrium in the far distant past, the dilating fiber to equilibrium in the future. We therefore have two Markov chains, oriented in opposite time directions, and one of these Markov chains is excluded by the Second Law, resulting in one irreversible process. How is this conclusion compatible with dynamics? In dynamics information is conserved, while in Markov chains information is lost and entropy therefore increases. There is, however, no contradiction (Ibid., p. 276). When we go from the dynamic description of the Baker transformation to the thermodynamic description, we have to modify our distribution function. The objects in terms of which entropy increases are different from the ones considered in dynamics. The new distribution function corresponds to an intrinsically time-oriented description of the dynamic system (Ibid., p. 277). An infinite entropy barrier separates possible initial conditions from
prohibited ones. Because this barrier is infinite, it cannot be overcome. The result is an irreversible process: we have to abandon the hope that one day we will be able to travel back into our past. To understand the origin of this barrier, we return to the expression of the H quantity as it appears in the theory of Markov chains, as given above. We have seen that to each distribution we can associate a number, the corresponding value of H. We can say that to each distribution corresponds a well-defined information content. The higher the information content, the more difficult it will be to realize the corresponding state. What we are about to show here is that the initial distribution prohibited by the Second Law would have an infinite information content. That is the reason why we can neither realize such a distribution nor find it in nature. Let us first come back to the meaning of H as presented earlier. We have to subdivide the relevant phase space into sectors or boxes. With each box k we associate a probability P_eq(k) at equilibrium, as well as a non-equilibrium probability P(k,t). H is a measure of the difference between P(k,t) and P_eq(k), and vanishes at equilibrium, when this difference disappears: when P(k,t) = P_eq(k), we have P(k,t)/P_eq(k) = 1, and log(P(k,t)/P_eq(k)) = log 1 = 0. And because at equilibrium all boxes have this value, the H value is 0 + 0 + 0 + 0 = 0. Therefore, to compare the Baker transformation with Markov chains, we have to make more precise the corresponding choice of boxes. For this we again give the Figure showing the generating partition and some of the basic partitions of the Baker transformation phase space. Figure above: Baker transformation applied, from time 0, three times forward and two times backward. The black and white areas can be considered to represent partitions of the phase space. The partition pertaining to time 0 will be called the generating partition or standard partition. The remaining partitions (also those beyond the ones drawn) are basic partitions. Suppose we consider
a system at time 2 (see Figure above), and suppose that this system originated at time t_i. Then a result of dynamical theory is that the boxes correspond to all possible intersections among the partitions between time t_i and time 2. If we now consider the Figure above, we see that when t_i is receding towards the past (which means that we consider the system, as it shows itself at time 2, as being older and older), the boxes will become steadily thinner, as we have to introduce more and more vertical subdivisions. This is expressed in the following Figure, where the arrows signify the direction from past to present. We see indeed that the number of boxes increases in this way from 4 to 32. Once we have the boxes, we can compare the non-equilibrium probability with the equilibrium probability for each box, i.e. assess these probabilities, and are thus able to compute the H value associated with that particular non-equilibrium distribution. In the present case the non-equilibrium distribution is either a dilating fiber (Sequence A in the next Figure) or a contracting fiber (Sequence C in the next Figure). Figure above: Dilating (sequence A) and contracting (sequence C) fibers cross various numbers of the boxes which subdivide a Baker transformation phase space. All squares in a given sequence refer to the same time, t = 2, but the number of boxes subdividing each square depends on the initial time t_i of the system; i.e. the number of boxes depends on how far back into the past the origin of the system lies. The fiber (red, as drawn in both sequences) is supposed to represent where in phase space, at time 2, the system might be. Here, in each case (contracting fiber, dilating fiber), the "where" refers to only one coordinate: with respect to the contracting fiber it is the horizontal coordinate, whereas with respect to the dilating fiber it is the vertical coordinate. The important point to notice is that when t_i is receding to the past, i.e. the system seen at time 2 is considered to be older and older, the d i l a t i n g
fiber occupies an increasingly large number of boxes (for t_i = 1 it occupies one box, for t_i = 0 it occupies 2 boxes, for t_i = -1 it occupies 4 boxes, for t_i = -2 it occupies 8 boxes, and so on), whereas the c o n t r a c t i n g fiber occupies 4 boxes for all t_i's. Now we are able to assess H values for the possible distributions of the system among the boxes of phase space. But before we do this, we first give some preparations. Recall that if we have a number of boxes and a number of marbles divided among these boxes, we get possible distributions of these marbles, and for each distribution we can compute the H value associated with it according to the formula given earlier (where log is the natural logarithm, often written as ln). If we have, say, 12 boxes (A, B, C, D, E, F, G, H, I, J, K, L) and, say, 12 marbles, such that each of the first four boxes (that is, A, B, C and D) contains three marbles while the other boxes contain none, then we can compute the H value associated with this particular distribution. See next Figure. Figure above: Top: the described distribution of 12 marbles among 12 boxes. Bottom: the distribution when the system is in equilibrium. To compute the H value we must first calculate the probability of finding a marble in a given box. This probability means the following: from the 12 marbles we choose one, that is to say, we have one particular marble in mind. Now we assess the probability of finding this particular
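The marble example just described can be checked numerically. The following is a minimal sketch that assumes the standard Markov-chain form of the H functional discussed on this page, H = sum over boxes k of P(k)·ln(P(k)/P_eq(k)); the variable names are mine, not the author's.

```python
import math

def H(p, p_eq):
    """H from the theory of Markov chains (assumed standard form):
    H = sum_k p[k] * ln(p[k] / p_eq[k]); empty boxes contribute nothing,
    and H vanishes when p equals p_eq."""
    return sum(pk * math.log(pk / pe) for pk, pe in zip(p, p_eq) if pk > 0)

# 12 marbles over 12 boxes: boxes A-D hold 3 marbles each, the rest none.
# Probability that one particular marble sits in box k:
p_initial = [3 / 12] * 4 + [0.0] * 8
p_equilibrium = [1 / 12] * 12   # equilibrium: one marble per box on average

h_initial = H(p_initial, p_equilibrium)          # = ln 3, about 1.0986
h_equilibrium = H(p_equilibrium, p_equilibrium)  # = 0: H vanishes at equilibrium
print(h_initial, h_equilibrium)
```

The non-equilibrium distribution carries a positive H (here exactly ln 3), while the equilibrium distribution gives 0, matching the statement above that H measures the distance from equilibrium.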

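Earlier on this page, P(k,t) = 1/8 was interpreted as a limiting relative frequency over many repeats, not as a guarantee about any run of eight draws. A small simulation (assuming, purely for illustration, eight equally likely boxes) shows both the short-run irregularity and the long-run convergence.

```python
import random

random.seed(1)
N_BOXES = 8   # hypothetical setup: eight equally likely boxes, so P(k) = 1/8
k = 3         # the box we watch

# A short run of 8 draws need not contain box k exactly once:
short_run_hits = [random.randrange(N_BOXES) for _ in range(8)].count(k)
print(short_run_hits)   # may be 0, 1, 2, ... despite P(k) = 1/8

# Over many repeats the relative frequency approaches 1/8:
trials = 100_000
hits = sum(random.randrange(N_BOXES) == k for _ in range(trials))
print(hits / trials)    # close to 0.125
```

This is exactly the averaging the text appeals to: the H function is built from these limiting frequencies, so the smooth decrease of H describes the superposition of many irregular runs.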
Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m5b.html (2016-02-01)

- General Ontology XXIXm5c

give them here again: "No heat engine, reversible or irreversible, operating in a cycle can take in thermal energy from its surroundings and convert all this thermal energy to work" (WEIDNER, Physics, 1989, p. 473). "No heat pump, reversible or irreversible, operating over a cycle can transfer thermal energy from a low-temperature reservoir to a higher-temperature reservoir without having work done on it" (WEIDNER, Physics, 1989, p. 474). "The total entropy of any real-world system always increases with time" (WEIDNER, Physics, 1989, p. 483). Now we are ready to discuss the above-mentioned two ways in which entropy can manifest itself, a distinction that plays a role in characterizing the three types or stages of thermodynamic systems: equilibrium thermodynamic systems, non-equilibrium linear thermodynamic systems, and non-equilibrium non-linear thermodynamic systems (truly dissipative systems). Entropy flow and Entropy production. This distinction makes sense as soon as we consider real-world thermodynamic systems. Let us, following PRIGOGINE & STENGERS, Order out of Chaos, pp. 118 of the 1986 Flamingo edition, consider the variation of the entropy, dS, over a short time interval, dt. The situation is quite different for ideal and real engines. In the first case dS may be expressed completely in terms of the exchanges between the engine and its environment. We can set up experiments in which heat is given up by the system instead of flowing into the system; the corresponding change in entropy would simply have its sign changed. This kind of contribution to entropy, which we shall call d_eS, is therefore reversible in the sense that it can have either a positive or a negative sign. The situation is drastically different in a real engine. Here, in addition to reversible exchanges, we have irreversible processes inside the system, such as heat losses, friction, and so on. These produce an entropy increase or entropy production inside the system. The increase of entropy, which we shall call d_iS, cannot change its sign through a
reversal of the heat exchange with the outside world. Like all irreversible processes (such as heat conduction), entropy production always proceeds in the same direction: the variation is monotonic, and entropy production cannot change its sign as time goes on. The notations d_eS and d_iS have been chosen to remind the reader that the first term refers to exchanges (e) with the outside world, that is, to entropy flow, while the second refers to the irreversible processes inside (i) the system, that is, to entropy production. The entropy variation dS is therefore the sum of the two terms d_eS and d_iS, which have quite different physical meanings. To grasp the peculiar feature of this decomposition of entropy variation into two parts, it is useful first to apply it to something to which it cannot be meaningfully applied, for instance to energy, and then to apply it to something else than entropy variation where it can be applied meaningfully, namely, for instance, the quantity of hydrogen in some given container. So let us apply an analogous decomposition to energy, denoting energy by E and its variation over a short time dt by dE. Of course we would still write that dE is equal to the sum of a term d_eE, due to the exchanges of energy, and a term d_iE, linked to the internal production of energy. However, the principle of conservation of energy states that energy is never produced but only transferred from one place to another. The variation in energy dE then reduces to d_eE, and because this is always so, the decomposition of dE into the two terms mentioned is not very useful. On the other hand, if we take a non-conserved quantity, such as the quantity of hydrogen molecules contained in a vessel, this quantity may vary both as a result of adding hydrogen to the vessel (hydrogen flow into the system) and through chemical reactions occurring inside the vessel. But in this case the sign of the production is not determined, as it is in entropy production. Depending on the circumstances we can
produce or destroy hydrogen molecules, by detaching hydrogen from compounds that were in the vessel or by letting hydrogen be taken up in some compound by a chemical reaction. The peculiar feature of the Second Law is the fact that the production term d_iS is always positive. The production of entropy expresses the occurrence of irreversible changes inside the system. If we leave the Carnot cycle and consider other thermodynamic systems, the distinction between entropy flow and entropy production can still be made. For an isolated system, which has no exchanges with its environment, the entropy flow is by definition zero. Only the production term remains, and the system's entropy can only increase or remain constant. Increasing entropy corresponds to the spontaneous evolution of the system. Increasing entropy is no longer synonymous with loss, but now refers to the natural processes within the system. These are the processes that ultimately lead the system to thermodynamic equilibrium, corresponding to the state of maximum entropy. Reversible transformations belong to classical science in the sense that they define the possibility of acting on a system, of controlling it. The purely dynamic object could be controlled through its initial conditions. Similarly, when defined in terms of its reversible transformations, that is, considering the thermodynamic system as being ideal (meaning no friction and the like), the thermodynamic object may be controlled through its boundary conditions. These concern the system's relations to its environment, such as ambient temperature, insulatedness or openness, or the system being maintained in a certain volume and under a certain pressure. They thus determine the relevant thermodynamic potential (entropy, free energy, or other such potentials), and with it the latter's extreme (maximum entropy, minimum free energy, and the like), and consequently determine the system's attractor. Any system in thermodynamic equilibrium whose temperature, volume, or pressure are g r a d u a l l y changed passes through a series of equilibrium states (PRIGOGINE & STENGERS, Ibid., p. 120), and any reversal of the manipulation leads to a return to its initial state. The reversible nature of such change, so defined, and the controlling of the object through its boundary conditions are interdependent. In this context irreversibility is negative: it appears in the form of uncontrolled changes that occur as soon as the system eludes control. But, inversely, irreversible processes may be considered as the last remnants of the spontaneous and intrinsic activity displayed by nature when experimental devices are employed to harness it. Thus the negative property of dissipation shows that, unlike purely dynamic objects, thermodynamic objects can only be partially controlled. Occasionally they break loose into spontaneous change. For a thermodynamic system, not every change is equivalent to every other change. This is the meaning of the expression dS = d_eS + d_iS. Spontaneous change (d_iS) toward equilibrium is different from the change d_eS, which is determined and controlled by a modification of the boundary conditions (for example, ambient temperature). For an isolated system, equilibrium appears as an attractor of non-equilibrium states (PRIGOGINE & STENGERS, Ibid., p. 120). The initial assertion, that all changes are not equivalent, may thus be generalized by saying that evolution toward an attractor state (involving d_iS) differs from all other changes, especially from changes determined by boundary conditions (and thus involving d_eS). Max Planck, as PRIGOGINE & STENGERS report, often emphasized the difference between the two types of change found in nature. Nature, wrote Planck, seems to favor certain states. The irreversible increase in entropy, d_iS/dt (entropy change in time), describes a system's approach to a state which attracts it, which the system prefers, and from which it will not move voluntarily. "From this point of view, Nature does not permit processes whose final states she finds less attractive than their
initial states. Reversible processes are limiting cases: in them, Nature has an equal propensity for initial and final states; this is why the passage between them can be made in both directions" (quoted by PRIGOGINE & STENGERS). How foreign such language sounds when compared with the language of dynamics, say PRIGOGINE & STENGERS. In dynamics a system changes according to a trajectory that is given once and for all, whose starting point is never forgotten, since initial conditions determine the trajectory for all time. However, in an isolated thermodynamic system all non-equilibrium situations produce evolution toward the same kind of equilibrium state. By the time equilibrium has been reached, the system has forgotten its initial conditions, that is, the way it had been prepared. Thus, say, the specific heat or the compressibility of a system in equilibrium are properties independent of the way the system has been set up. This fortunate circumstance greatly simplifies the study of the physical states of matter. Indeed, complex systems consist of an immense number of particles. From the dynamic standpoint it is, in the case of systems with many particles, practically impossible to reproduce any state of such systems, in view of the infinite variety of dynamic states that may occur. Were knowledge of the system's history indispensable for understanding the mentioned properties, we would hardly be able to study them. We are now confronted with two basically different descriptions: dynamics, which applies to the world of motion, and thermodynamics, the science of complex systems, with its intrinsic direction of evolution toward increasing entropy. This dichotomy immediately raises the question of how these descriptions are related, a problem that has been debated since the laws of thermodynamics were formulated. It is, thus, the question of how the formulations of thermodynamics can be reconciled with those of dynamics. We will find a direction to its answer in the order principle of Boltzmann. We have discussed this
principle in the Section "Microscopic consideration of entropy" in Part XXIX, Sequel-28. Boltzmann's results signify that irreversible thermodynamic change is a change toward states of increasing probability, and that the attractor state is a macroscopic state corresponding to maximum probability, that is, not a particular individual state, but a category or type of states, as such macroscopically recognizable as a category of distributions of places, velocities, and so on. The category with the highest number of complexions (the number of permutations, or number of ways such a category can theoretically be achieved, giving the relative probability of such a category being realized) has the highest probability to represent the state of the system. The distribution category corresponds to a macroscopic state, whereas an individual distribution, i.e. any particular distribution, corresponds to a microscopic state. All this takes us far beyond Newton. For the first time a physical concept has been explained in terms of probability. Its utility is immediately apparent. Probability can adequately explain a system's forgetting of all initial dissymmetry (that is, low symmetry), of all special distributions, for example the set of particles concentrated in a subregion of the system (later to diffuse all over the system), or the distribution of velocities that is created when two gases of different temperatures are mixed (which dissymmetric distribution will eventually become homogenized). This forgetting is possible because, whatever the evolution peculiar to the system, it will ultimately lead to one of the microscopic states corresponding to the macroscopic state (distribution category) of disorder and maximum symmetry (that is, homogeneity), since such a macroscopic state corresponds to the overwhelming majority of possible microscopic states; that is to say, such a macroscopic state can, as a category, be achieved in an enormous number of different ways, where each way represents an individual distribution
pertaining to that category. Once this state has been reached, the system will move only short distances from the state, and for short periods of time. In other words, the system will merely fluctuate around the attractor state. Boltzmann's order principle implies that the most probable state available to the system is the one in which the multitude of events taking place simultaneously in the system compensate for one another statistically. In the case of a gas initially present in only one half of a container (or whatever the initial distribution of it over the two halves of the container was), the system's evolution will ultimately lead it to the equal distribution N_1 = N_2 (the number of particles in the left half equals the number in the right half of the container). This state will put an end to the system's irreversible macroscopic evolution. Of course the particles will go on moving from one half to the other, but on the average, at any given instant, as many will be going in one direction as in the other. And this is just the same as to say that the many possible individual distributions belonging to the distribution category characterized by the presence of just equal numbers of particles in the two halves of the container alternate among one another. As a result, the motion of the particles will cause only small, short-lived fluctuations around the equilibrium state N_1 = N_2. Boltzmann's probabilistic interpretation thus makes it possible to understand the specificity of the attractor studied by equilibrium thermodynamics. Let us now proceed on our course toward non-equilibrium thermodynamics. In the kinetics involved in thermodynamic processes we must consider the rates of irreversible processes, such as heat transfer and the diffusion of matter. The rates (we could say the speeds) of irreversible processes are also called fluxes and are denoted by the symbol J. The thermodynamics of irreversible processes introduces a second type of quantity: in addition to the rates or fluxes J, it uses
generalized forces X that cause the fluxes. The simplest example is that of heat conduction. Fourier's law tells us that the heat flux J is proportional to the temperature gradient. This temperature gradient is the force causing the heat flux. By definition, fluxes and forces both vanish at thermal equilibrium. The production of entropy, P = d_iS/dt, can be calculated from the fluxes and the forces. As could be expected, all irreversible processes have their share in entropy production. Each process participates in this production through the product of its rate or flux J multiplied by the corresponding force X. The total entropy production per unit time, i.e. the rate of change of entropy in time, P = d_iS/dt, is the sum of these contributions, each of which appears through the product JX. We can divide thermodynamics into three large fields, the study of which corresponds to three successive stages in its development. Entropy production, the fluxes, and the forces are all zero at equilibrium. In the close-to-equilibrium region, where thermodynamic forces are weak, the rates J_k are linear functions of the forces. The third field is called the non-linear region, since in it the rates are in general more complicated functions of the forces. Let us first emphasize some general features of linear thermodynamics, which therefore apply to close-to-equilibrium situations. We'll do this by means of an instructive example: thermodiffusion. See next Figure. We have two closed vessels, connected by a tube, and filled with a mixture of two different gases, for example hydrogen and oxygen. We start with an equilibrium situation: the two vessels have the same temperature and pressure and contain the same homogeneous gas mixture. Now we apply a temperature difference T_h - T_c between the two vessels. The deviation from equilibrium as a result of this temperature difference can only be maintained when the temperature difference is maintained, because if we do nothing the whole system will end up with the same temperature all
over again (lying between the two initial temperatures), as a result of heat conduction; that is, the system ends up at equilibrium again. So we need a constant heat flux which compensates the effects of thermal diffusion: one vessel is constantly heated (Q_h in), while the other is cooled (Q_c out). So the system is constantly subjected to a thermodynamic force, namely the thermal gradient. The experiment shows that, in connection with the thermal diffusion (heat conduction from one vessel to the other), a process appears in which the two gases become separated. When the system has reached its stationary state, in which, at a given heat flux into and out of the system, the temperature difference remains the same as time goes on, more hydrogen will be present in the warmer vessel and more oxygen in the cold vessel. That is to say, a concentration gradient is the result. The difference in concentration, i.e. the degree of separation, is proportional to the temperature difference. So the thermodynamic force, which here was the temperature gradient, has effected a concentration gradient where initially the concentration was uniform. As long as the temperature difference is maintained, the separation will be maintained. In fact, according to PRIGOGINE, 1980, p. 87, the system has two forces, namely X_k, corresponding to the difference in temperature between the two vessels, and X_m, corresponding to the chemical potential (which I presume to be, in the present case, the concentration gradient between the two vessels), and the system has two corresponding flows (fluxes): J_k (heat flow) and J_m (matter flow). The system reaches a state in which the transport of matter vanishes, whereas the transport of energy between the two phases (vessels) at different temperatures continues: that is, a steady non-equilibrium state. Such a state should not be confused with an equilibrium state, which is characterized by a zero entropy production. It cannot be denied that the transformation from a homogeneous distribution of the two gases to a separation
of them means an increase in the order of the system. But this is so because the system is open and is constantly being subjected to the mentioned thermodynamic force. And as long as this force (temperature gradient) is moderate, it will be proportional to its effect (concentration gradient); that is to say, the system is then linear, or, equivalently, it finds itself in the linear region: when the force is doubled, so is its effect; when it is tripled, so is its effect; and so on. We see that in this case the activity which produces entropy (the diffusion of heat all over the system) cannot simply be identified with the leveling-out of differences. Surely the heat flux from one vessel to the other plays this role, but the process of separation of the two gases, which originates by virtue of the coupling with the thermal diffusion within the system, is itself a process by which a difference is created, an anti-diffusion process, which provides a negative contribution to the entropy production. This simple example of thermodiffusion shows how important it is to abandon the idea that activity in which entropy is produced (heat diffusion within the system) is equivalent to degradation, that is, to the leveling-out of differences. For although it is true that we have to pay an entropy fee for the maintenance of the stationary state of the thermodiffusion process, it is also true that that state corresponds with the creation of order. In such a non-equilibrium thermodynamic process a new view is possible: we can evaluate the disorder that has originated as a result of the maintenance of the stationary state as that which makes possible the creation of a form of order, namely a difference in chemical composition in the two vessels. Order and disorder are here not opposed to each other, but are inextricably connected. The next Figure shows how it is with the entropy fee: in order to maintain the temperature difference, and thus to maintain the system's stationary state of separated gases, the relevant part of the
environment, which initially enjoyed some temperature difference, loses this difference. Figure above: Thermodiffusion. The entropy fee paid by the environment to create local order. The thermodiffusion process reminds us a little of the Carnot process discussed in Part XXIX, Sequel-28, but it is in fact quite different, despite the fact that both involve a permanent difference which is, so to say, the motor of the production of order (work in the Carnot process, chemical separation in the thermodiffusion process), just as a difference in water level can drive a turbine. Let us compare the two. The Carnot process has the following properties. Because heat energy is added, part of it converted into work and the rest being exported, there is a constant flow of heat, and one would therefore at first sight expect that the system is not in equilibrium. But because (see directly below) the system is reversible, it must be in equilibrium: "A reversible process must in general consist of a succession of infinitesimal changes taking place slowly, so that at each stage the system is in thermodynamic equilibrium. Then, and only then, can a temperature be defined for all intermediate stages" (WEIDNER, Physics, 1989, p. 470). So the system passes from one equilibrium state to another as it proceeds through a cycle, meaning that the Carnot process takes place under thermodynamic equilibrium. The system, and every segment of it, is reversible: it can be displayed by continuous lines in a pV diagram. This is because there is no direct contact between the hot source and the cold source. The two isothermal processes (isothermal expansion and isothermal compression), where heat exchanges take place to maintain the temperature, are separated by two adiabatic processes (adiabatic expansion and adiabatic compression), where heat exchanges are made impossible by applying insulation, as a result of which the temperature can change. So in the ideal Carnot engine there is no heat conduction from the hot source to the cold source,
and therefore there is no irreversible segment present in the ideal engine (where ideal means no friction and no heat losses as a result of the warming up of the whole engine). Entropy production is zero, while there is entropy flow. The thermodiffusion process has the following properties. Because a thermal difference is constantly being maintained in the system, not as a result of a spontaneous process of the system, but as a result of applying heat to one vessel and of cooling the other, and because, in addition, the process is irreversible (see directly below), the system is not in equilibrium. It would only end up at equilibrium when the heating of one vessel and the cooling of the other had been stopped, as a result of which the temperature had become uniform all over the system. The process is irreversible because thermal conduction from one vessel to the other is involved, and thermal conduction is itself irreversible. Entropy production is minimal: because the boundary conditions of this system (thermodiffusion) are such that it is prevented from reaching equilibrium (where the entropy production is zero), the system settles down in the state of least dissipation, that is, of minimal entropy production, according to PRIGOGINE & STENGERS (1984 and 1988) and PRIGOGINE (1980). Linear non-equilibrium thermodynamics of irreversible processes is dominated by two important results, namely the Onsager reciprocity relations and the theorem of minimal entropy production. Let us first discuss the Onsager reciprocity relations. In 1931 Lars Onsager discovered the first general relations in non-equilibrium thermodynamics for the linear, near-to-equilibrium region. We will explain (not prove) these relations by using the just discussed thermodiffusion experiment. This experiment in fact consists of two processes, which we shall call Process 1 and Process 2. Process 1 consists of Force 1 (temperature gradient)
effecting Flux 1 (heat flow from one vessel to the other). Process 2 consists of Force 2 (concentration gradient) effecting Flux 2 (diffusion). Process 1 (temperature gradient, heat flow) proceeds according to Fourier's law, an empirical law for heat conduction: the flow of heat is proportional to the gradient of temperature. Process 2 (concentration gradient, diffusion, matter flow) proceeds according to Fick's law, also empirical, for diffusion: the flow of diffusion is proportional to the gradient of concentration. All this is visualized in the next diagram. Figure above: The two processes (1 and 2), the two forces (1 and 2), and the two fluxes (1 and 2) in the thermodiffusion experiment. With the help of the above Figure we can now state the Onsager relations: when the Flux 1, corresponding to the irreversible Process 1, is influenced by the Force 2 of the irreversible Process 2, then the Flux 2 is also influenced by the Force 1, through the same coefficient L_12; that is, L_12 = L_21 (PRIGOGINE, From Being to Becoming, 1980, p. 86). This holds for all linear non-equilibrium systems. For example, in each case where a thermal gradient induces a process of diffusion of matter, we find that a concentration gradient applied to that system can set up a heat flux through the system. The importance of the Onsager relations resides in their generality. They have been submitted to many experimental tests. Their validity showed for the first time that non-equilibrium thermodynamics leads, as does equilibrium thermodynamics, to general results independent of any specific molecular model. It is immaterial, for instance, whether the irreversible processes take place in a gaseous, liquid, or solid medium. This is the feature that makes the Onsager relations a thermodynamic result. What follows now is taken (not necessarily as a quote) from PRIGOGINE & STENGERS, Order out of Chaos, pp. 138 of the 1986 Flamingo edition, with some comments of mine between square brackets. Reciprocity relations have been the first results in the thermodynamics of irreversible processes to indicate that this was not some ill-defined no-man's-land but a
What follows now is taken (not necessarily as a quote) from PRIGOGINE & STENGERS, Order out of Chaos, p. 138 of the 1986 Flamingo Edition, with some comments of mine between square brackets.

Reciprocity relations have been the first results in the thermodynamics of irreversible processes to indicate that this was not some ill-defined no-man's-land but a worthwhile subject of study whose fertility could be compared with that of equilibrium thermodynamics. Equilibrium thermodynamics was an achievement of the nineteenth century, non-equilibrium thermodynamics was developed in the twentieth century, and Onsager's relations mark a crucial point in the shift of interest away from equilibrium thermodynamics toward non-equilibrium thermodynamics.

A second general result in this field of linear non-equilibrium thermodynamics is that of minimal entropy production. We have already spoken of thermodynamic potentials whose extrema correspond to the states of equilibrium toward which thermodynamic evolution tends irreversibly (such as the leveling out of temperature by heat conduction). Such are the entropy S for isolated systems and the free energy F for open systems at a given temperature, that is, at a constant temperature. The thermodynamics of close-to-equilibrium systems also introduces such a potential function. It is quite remarkable that this potential is the entropy production P itself. The theorem of minimum entropy production does in fact show that, in the range of validity of Onsager's relations, that is, the linear region, a system evolves toward a stationary state characterized by the minimum entropy production, which is thus the extremum of the entropy-production potential compatible with the constraints imposed upon the system. These constraints are determined by the boundary conditions. They may, for instance, correspond to two points in the system kept at different temperatures, or to a flux of matter that continuously supports a reaction and eliminates (i.e. exports) its products.

The stationary state toward which the system evolves is then necessarily a non-equilibrium state, at which dissipative processes with non-vanishing rates occur. But since it is a stationary state, all the quantities that describe the system, such as temperature, concentrations, and so on, become time-independent. Similarly, the entropy of the system now becomes independent of time. Therefore its time variation vanishes: dS = 0. But we have seen above that the time variation of entropy is made up of two terms, the entropy flow d_eS and the positive entropy production d_iS. Therefore dS = 0 implies that d_eS + d_iS = 0, that is, because d_iS is positive and dS = d_eS + d_iS = 0, d_eS must be the same in absolute value as d_iS, but with opposite sign. The heat or matter flux coming from the environment determines a negative flow of entropy d_eS, which is, however, matched by the entropy production d_iS due to irreversible processes inside the system. A negative flux d_eS means that the system transfers entropy to the outside world. Therefore, at the stationary state, the system's activity continuously increases the entropy of its environment. (See Figure farther above.) This is true for all stationary states. But the theorem of minimum entropy production says more: the particular stationary state toward which the system tends is the one in which this transfer of entropy to the environment is as small as is compatible with the imposed boundary conditions. In this context the equilibrium state corresponds to the special case that occurs when the boundary conditions allow a vanishing entropy production. In other words, the theorem of minimum entropy production expresses a kind of "inertia": when the boundary conditions prevent the system from going to equilibrium, it does the next best thing -- it goes to a state of minimum entropy production, that is, to a state as close to equilibrium as possible.

Linear thermodynamics thus describes processes that spontaneously tend toward the extremum of some thermodynamic potential (which potential here is entropy production, while its extremum is minimal entropy production), and therefore the stationary state is, like the equilibrium state of equilibrium thermodynamics, stable. So linear thermodynamics describes the stable, predictable behavior of systems tending toward the minimum level of activity compatible with the fluxes that feed them.
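Under the same illustrative assumptions as before (a symmetric 2x2 Onsager matrix, with the thermal force X1 held fixed by the boundary conditions while the force X2 is free to adjust), the theorem can be checked numerically: the entropy production P is minimal exactly where the flux conjugate to the free force vanishes, i.e. at the stationary state:

```python
import numpy as np

# Illustrative symmetric Onsager matrix (same invented numbers as before).
L = np.array([[2.0, 0.3],
              [0.3, 1.5]])

def entropy_production(X):
    """P = sum_ij L_ij X_i X_j -- quadratic and non-negative in the linear regime."""
    return X @ L @ X

X1 = 1.0                                # force fixed by the boundary conditions
grid = np.linspace(-2.0, 2.0, 4001)     # candidate values of the free force X2
P = np.array([entropy_production(np.array([X1, x2])) for x2 in grid])
x2_star = grid[np.argmin(P)]            # state of minimal entropy production

# At that state the flux conjugate to the free force vanishes,
# J2 = L21*X1 + L22*X2 = 0  (analytically X2 = -L12*X1/L22 = -0.2):
J2 = L[1, 0] * X1 + L[1, 1] * x2_star
```

The minimum is not at zero entropy production (the fixed force X1 prevents that, just as the imposed thermal difference does in thermodiffusion); it is the lowest production compatible with the constraint.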
The fact that linear thermodynamics, like equilibrium thermodynamics, may be described in terms of a potential (the entropy production) implies that, both in evolution toward equilibrium and in evolution toward a stationary state, initial conditions are forgotten. Whatever the initial conditions, the system will finally reach the state determined by the imposed boundary conditions, which in turn determine the kind of relevant thermodynamic potential. As a result, the response of such a system to any change in its boundary conditions (for example, a not too high increase of the applied difference of temperature) is entirely predictable.

We see that in the linear range, that is, close to equilibrium, the situation remains basically the same as at equilibrium. Although the entropy production does not vanish, as it does at equilibrium, neither does it prevent the irreversible change from being identified as an evolution toward a state that is wholly deducible from general laws. This "becoming" inescapably leads to the destruction of any difference, any specificity. That is to say, just as in equilibrium thermodynamics: from whatever initial condition, possessing whatever differences and specificities, the system leaves it all behind and evolves toward a general attractor state determined by the extremum of its thermodynamic potential.

Carnot or Darwin?

The question remains: there is still no connection between the appearance of natural organized forms on one side and, on the other, the tendency toward forgetting of initial conditions along with the resulting disorganization (PRIGOGINE & STENGERS, Ibid., p. 140). One should not forget, however, that in close-to-equilibrium systems local order can appear, as seen in thermodiffusion. But the stationary state is, like the equilibrium state, stable and predictable, and thus has no potential to venture into new territory. As we will see, all that changes when the system is driven far from equilibrium and, as a result, enters the non-linear domain.

Crystallization
After having discussed general equilibrium thermodynamics in Part XXIX, Sequel 28 (inserted in the discussion of models of unstable systems in order to have a good understanding of entropy and energy), and after having discussed general close-to-equilibrium thermodynamics (present document), it is now time to discuss, at least in a preliminary way, the thermodynamic status of plain crystallization, i.e. the formation of plain crystals (not of branched crystals). It is not easy to assess this status, even after having considered so much general thermodynamics. So the following discussion is -- especially because the author of this website is not a professional physicist or chemist -- indeed only preliminary, and not in all respects certain or free of inconsistencies.

Crystals can appear either in a solution, or in a melt, or directly from a vapor. As in thermodiffusion, which is a close-to-equilibrium process, in crystallization local order emerges as a result of an imposed fall, that is, of an imposed difference, which is as such a thermodynamic force, and which causes a flux of some kind, where the force and the flux are supposed to be linearly related.

In the case of thermodiffusion the imposed force is a difference in temperature. And this force is applied to an initially homogeneous system, that is, a system in which two gases are uniformly distributed over the system. As a result of the application of the force, the uniform distribution of the gases becomes unstable: the gases separate, and thus local order emerges. This order endures as long as the force is applied. Entropy is given off to the environment, and as long as the force is not too great, that is, as long as the system remains linear, the entropy production and its transfer to the environment will be minimal, i.e. as low as the boundary conditions permit. When the force is no longer applied, the system becomes an equilibrium thermodynamic system: the temperature becomes uniform, the force is zero, and so are the fluxes that were ultimately caused by it. The separation of the two gases will become undone.

In order to induce plain crystallization from a solution we must impose a force of concentration: the solution must be moderately supersaturated. Here, then, we have created a fall: supersaturation ⇒ saturation. When a solution is made supersaturated with respect to some solute, it means that the solution, that is, the uniform distribution of the solute in the solvent, becomes unstable. Above a certain size, a crystal embryo that had emerged by chance in the solution will lower the free energy when it grows larger, so it will spontaneously grow larger. So a crystal is growing in the solution. And if the supersaturation level is maintained, for instance by letting the solvent slowly evaporate during crystallization, the crystal will keep on growing. As long as this is the case we have, I think, to do with a close-to-equilibrium system. In contrast to equilibrium systems, it creates local order. The growing crystal is never in complete equilibrium because of the ever present surface energy (caused by dangling chemical valences, or distorted or disconnected bonds), and as long as the mentioned level of difference is kept, the growing crystal exports entropy to the environment, randomizing this environment instead of itself.

For crystallization directly from a vapor, as always occurs, for instance, in the case of snow crystal formation (see Part XXIX, Sequel 13, about phase transitions), the imposed fall also consists of a distance, here in fact of two distances (differences), namely (1) the distance between moderate supersaturation and saturation (or even undersaturation) -- in the case of snow, very high levels of supersaturation, that is, very high humidities, can result in branched crystals, whereas moderate or low humidities (moderate or low supersaturation of air with water vapor, or even undersaturation) generally create plain, stable crystals having the form of hexagonal prisms -- and (2) a distance (difference) in temperature: supercooling temperature ⇒ melting-point temperature. Also here order is created, and entropy is exported to the environment.

For snow crystals, which always originate directly from water vapor in the air, one has constructed a morphology diagram that displays

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m5c.html (2016-02-01)

General Ontology XXIXm5d

releasing the average kinetic energy of the particles (which is actual energy) and consequently increasing the bulk temperature. Thus, all in all, the conversion into active heat of the potential energy (heat of fusion), which as such did not increase the temperature during melting but was used to randomize the configuration of particles, generally decelerates the process of solidification of the melt.

In a solution (gaseous, liquid, or solid solution), however, the release of heat of crystallization effects a higher rate of transport of nutrient material to the crystal surface. This transport is relevant and necessary, because in a solution the solute particles, which are about to gather and conform into a crystal lattice, are separated from each other and from the surface of the growing crystal by solvent particles. Generally, the latent heat of fusion, before it is released to the ambient fluid, is used for the form-generating process, and, after having done the work (letting the particles fall into their favorable positions, lattice points), it is transformed into the extra undirected kinetic energy (that is, its increase) of the particles of the ambient fluid, and thus is dissipated over this fluid (PRZIBRAM, Ibid., p. 113).

Perhaps we could rephrase all this a little bit. The higher potential energy of the fluid (solution, melt) consists in the fluid being disorganized, i.e. the ever changing positions of the particles being such that the system is, at conditions of supersaturation or undercooling, far away from its stable state, which stable state consists in the lowest-energy configuration of its particles, which configuration is the organization into some definite lattice. Normally we can recognize this higher potential energy already by the fact that the particles of the solution or melt stand farther away from each other than they do in the corresponding solid. Water, however, is, as we know, an exception: when liquid water freezes it expands, that is, the particles settle at places such that they come to stand farther apart from each other. (Of course, in the formation of ice directly from water vapor the particles do come closer together.) In the case of liquid water freezing to ice we must imagine that at the freezing point the system of particles is like a spring that is pressed instead of stretched (as in almost all other cases, i.e. other substances), that is, it has ended up in a situation as if being pressed. So if we let the system freeze, we allow this tension to be relaxed, and the system spontaneously expands and organizes its particles into their energetically favorable positions (lattice points). So also in this case we see a decrease of potential energy. With substances that differ in this respect from water, the spring is, under conditions of supersaturation or undercooling, initially held in a stretched position, and upon letting them crystallize or freeze (by adding suitable nuclei) we let go this stretching, which results in the system spontaneously contracting, yielding crystals whose density is higher than that of the corresponding melt. In all cases, whether it be water or any other substance, the spontaneous transition from liquid to solid involves relaxation. And as long as this relaxation hasn't yet happened (still at conditions of supersaturation or undercooling), the system has high potential energy. Upon crystallization this potential energy is converted into directed kinetic energy of particles, resulting in their being taken up into a crystal lattice, that is, the particles fall into their energetically favorable positions, either by a structured widening (as in the freezing of water) or by a structured compaction (as in most other cases), where both processes, widening and compaction, are relaxations.

Generally we must say that for a process, and thus also crystallization, to be possible, a difference in potential energy levels must be present, where the system possesses the higher level. However, the ultimate condition that must be fulfilled for a process to be possible is that the
net entropy of the system as a whole would, as a result of this process, increase. Only then do we have the ultimate "potential energy difference" that determines the feasibility and direction of the process.

Considering crystallization from a melt: after the particles have fallen into their favorable positions, their directed kinetic energy, which has led them toward these positions, will be converted into undirected, randomized kinetic energy of the particles of the melt, which energy now is just heat -- precisely as is the case with a stone fallen from some height above the ground, which upon hitting the ground (inelastically) creates heat at the site of impact. That is, the already randomized average kinetic energy of the particles of the melt has just increased: the temperature goes up, but quickly goes down again, as in the oscillation described above (this low-amplitude oscillation in fact represents the constant temperature at which the liquid state (melt) and the solid state (crystal) are in equilibrium). And if this released heat of crystallization is effectively removed from the system, or at least from the actual site of crystallization, the latter can go on at a constant temperature (heat removed = heat evolved by continued crystallization), which is the solidification point of the given substance and is also the mentioned temperature at which the two phases are in equilibrium with each other.

In the case of crystallization from a solution -- as in crystallization of some salt, say potassium chloride, sodium sulfate, or whatever, in water, or crystallization of water vapor in the air, as is the case in snow crystal formation -- the situation is somewhat different from crystallization from a melt. For all this we must bear in mind that the equation ΔG = ΔH − TΔS (where Δ means "change of", G = Gibbs free energy, H = enthalpy, T = absolute temperature, and S = entropy) determines the feasibility in principle of some given transformation. In our case such a transformation could be (1) the melting of a given substance, (2) the solidification (crystallization) of that substance from its melt, (3) the dissolution of a given substance in some solvent, or (4) the crystallization of this substance from its solution.

The solubility of a solute in a solvent is its concentration in a saturated solution, where the concentration is the amount of dissolved solute, which can be measured in several ways (grams of solute per gram of solution, moles of solute per liter of solution, humidity, etc.). The solubility changes with temperature, and either increases or decreases with increasing temperature, depending on the nature of the solute (for a given solvent). The heat evolved or absorbed when the solute dissolves to produce a saturated solution is, as we know, the heat of solution. If, depending on the nature of the solute, heat is evolved when solution occurs, the heat of solution ΔH is a negative quantity, where it codetermines the value of the change in the Gibbs free energy with respect to spontaneous dissolution. If, on the other hand (for other solutes), it is absorbed, ΔH is positive, where it also codetermines the value of the change in the Gibbs free energy with respect to spontaneous dissolution. So it seems that, if we conceptually reverse the process, that is, if we let a solute crystallize from a solution, heat can be absorbed or, in the case of other solutes, released.

When upon dissolution heat is absorbed (ΔH > 0), as in, say, the dissolution of potassium chloride (KCl) in water (see the solubility curve for KCl further below), this heat is released again upon crystallization (ΔH < 0). So this latter enthalpy term contributes to making the Gibbs free energy change ΔG negative in ΔG = ΔH − TΔS (for crystallization feasibility). So in this case we have the same situation as in the case of solidification from a melt. There heat is always evolved, because there we have only a one-component system, that is, there is no interaction between, say, a solute and the solvent.
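The sign bookkeeping with ΔG = ΔH − TΔS used throughout the following cases can be sketched as a small helper. The numerical values below are invented for illustration only (they are not measured data for KCl or any other salt):

```python
def delta_G(delta_H, T, delta_S):
    """Gibbs free-energy change (in J) for a transformation at constant
    temperature T (in K) and pressure:  dG = dH - T*dS.
    dG < 0 means the transformation is thermodynamically feasible."""
    return delta_H - T * delta_S

# Dissolution of a KCl-like salt: heat is absorbed (dH > 0), but the
# entropy of mixing is large enough to make dG negative anyway.
dG_dissolution = delta_G(delta_H=+17.0e3, T=298.0, delta_S=+75.0)

# Crystallization from a supersaturated solution: heat is released
# (dH < 0) and outweighs the local entropy decrease (dS < 0).
# (Supersaturation changes the conditions, so the numbers need not be
# the exact negatives of those for dissolution.)
dG_crystallization = delta_G(delta_H=-17.0e3, T=298.0, delta_S=-40.0)

# Both processes are feasible under their respective conditions:
assert dG_dissolution < 0 and dG_crystallization < 0
```

The helper makes the recurring argument explicit: a positive ΔH can be overcome by a sufficiently positive TΔS (dissolution), and a negative TΔS can be overcome by a sufficiently negative ΔH (crystallization).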
And there, that is, in solidification from a melt, we always have relaxation taking place, with a release of heat that in melting was the absorbed heat of fusion, and now is that same heat released as heat of crystallization.

When, on the other hand, heat is released upon dissolution (ΔH < 0), as in, say, the dissolution of dry sodium sulfate (Na2SO4) in water, this heat is apparently absorbed again upon crystallization (ΔH > 0). So here we apparently have a case of crystallization where heat is taken in instead of being released; that is to say, under these conditions crystalline Na2SO4 is less stable than its solution. Thus the enthalpy term, which is now positive, works against the Gibbs free energy change ΔG becoming negative in ΔG = ΔH − TΔS (for crystallization feasibility). So it seems that here the entropy term TΔS must counteract this enthalpy term in order to make ΔG negative. Thus, when crystallization of this salt (sodium sulfate) actually occurs, indicating that under the given circumstances it was feasible anyway, this entropy term should then more than compensate for the positivity of the enthalpy term. How this is accomplished (which could be a problem, because a crystal has lower entropy than its solution) might be explained as follows.

In fact, it -- that is, the alleged compensation -- is not accomplished at all. The entropy of the resulting crystal is not higher than that of its solution under the same circumstances, and the enthalpy term must be negative. How could that be? To explain this we must come to know a little bit more about hydrates and anhydrites of salts. For us it is sufficient to see how things are in this respect in the case of sodium sulfate. Sodium sulfate can exist in several forms, depending on temperature and pressure:

A form in which ten water molecules are chemically bonded to each unit of Na2SO4 in the crystal lattice (crystal water), thus having as its formula Na2SO4·10H2O, and called the decahydrate. As a solid it is crystalline.

A form in which seven water molecules are chemically bonded to each unit of Na2SO4 in the crystal lattice (crystal water), thus having as its formula Na2SO4·7H2O, and called the heptahydrate. As a solid it is crystalline. In all conditions of temperature and pressure this form is at most metastable, which means that there is always another form possible that is more stable. We will not pursue this form any further.

The dry form, that is, the water-free form, having as its formula Na2SO4, is called the anhydrite. As a solid it can be in a crystalline state.

(Handwörterbuch der Naturwissenschaften, 1912, VI, p. 393 -- an old reference, but plain knowledge.)

Let us further investigate sodium sulfate's decahydrate and anhydrite by means of their solubility curves. According to OUELLETTE, Introductory Chemistry, 1970, p. 176, Figure 8-4, the solubility curves of Na2SO4 and of Na2SO4·10H2O in water are quite different. For the former (dried sodium sulfate) the curve, beginning at about 33 °C, goes down with increasing temperature, meaning that heating of the solution is counteracted by less sodium sulfate being dissolved, which in turn means that upon dissolution of this salt heat is released, in line with Le Châtelier's principle. The dissolution process is in this case exothermal. For Na2SO4·10H2O, on the other hand, the solubility curve increases steeply with increasing temperature, halting at about 33 °C, meaning that heating of the solution is counteracted by dissolving still more of this salt, which in turn means that heat is absorbed as a result of dissolution. And for this latter case this implies that in the reverse process, that is, crystallization of Na2SO4·10H2O, this heat of dissolution is released again, making the enthalpy term definitely negative. See next Figure.

Figure above: Solubility of solid sodium sulfate decahydrate (left) and of solid sodium sulfate anhydrite in water, as a function of temperature (curves are approximate).

If we dissolve more and more decahydrate in water at a temperature below 32.6 °C, say at 20 °C, we will eventually obtain a saturated solution, that is, a solution that is saturated with respect to the decahydrate at that temperature. If we now increase the temperature, the solubility of the decahydrate increases according to its solubility curve, so we can dissolve still more of it in the same volume of water. At 32.6 °C we can dissolve a maximum amount of decahydrate, because as soon as we -- while still preparing a saturated solution -- get past 32.6 °C, our solution suddenly becomes supersaturated, not with respect to the decahydrate, but with respect to the anhydrite. The solution went, upon passing 32.6 °C, from a saturated solution with respect to the decahydrate to a supersaturated solution with respect to the anhydrite. So when suitable crystallization nuclei are present, solid anhydrite will be precipitated. (All the above is according to a textbook of inorganic chemistry, Beginselen der Chemie by Van ERP, p. 34, but is expected to be found in any such textbook.) And the solution will now, that is, at a temperature above 32.6 °C, be saturated with respect to the anhydrite. So we see the temperature stability regions of the decahydrate and the anhydrite lying to the left, resp. to the right, of 32.6 °C.

Below we will discuss the origin and expenditure of the potential energy stock with respect to several cases of crystallization. With respect to sodium sulfate we will treat the decahydrate and the anhydrite separately; that is to say, we dissolve the decahydrate at a temperature below 32.6 °C, supersaturate the solution, and then let the decahydrate crystallize again. After this we dissolve the anhydrite at a temperature above 32.6 °C, supersaturate the solution, and then let the anhydrite crystallize again. See next Figure.

Figure above: Dissolution, saturation, supersaturation, and crystallization of the decahydrate of sodium sulfate and of the anhydrite of sodium sulfate.
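The two stability regions around the 32.6 °C transition point can be sketched as a simple rule. The rule below only encodes which solid form a saturated aqueous solution is saturated with respect to; the solubility-curve shapes themselves are qualitative, as in the Figure discussed above:

```python
# Transition temperature between the two solid forms of sodium sulfate
# in contact with its saturated aqueous solution.
TRANSITION_C = 32.6

def stable_solid_phase(temp_c):
    """Which solid form of sodium sulfate precipitates from a saturated
    aqueous solution at the given temperature (degrees Celsius)."""
    if temp_c < TRANSITION_C:
        # Below the transition point: the decahydrate, whose solubility
        # rises steeply with temperature (endothermic dissolution).
        return "decahydrate"   # Na2SO4.10H2O
    # Above the transition point: the anhydrite (dry Na2SO4), whose
    # solubility falls with temperature (exothermic dissolution).
    return "anhydrite"

# Heating a solution kept saturated past 32.6 C switches the phase
# with respect to which it is saturated (or supersaturated):
assert stable_solid_phase(20.0) == "decahydrate"
assert stable_solid_phase(40.0) == "anhydrite"
```

This mirrors the experiment described above: a solution saturated with decahydrate at 20 °C becomes, upon passing 32.6 °C, supersaturated with respect to the anhydrite, so the anhydrite is what precipitates.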
So, while in crystallization from a melt the released heat must be led away, thus letting the temperature remain constant (and this must be so if we want to use the Gibbs free energy as a thermodynamic potential whose minimal value is the attractor of the system), the same is the case in crystallization from a solution of substances like potassium chloride or (at a temperature below 32.6 °C) the decahydrate of sodium sulfate: the heat that was absorbed upon dissolution is released again upon crystallization of these substances. In the case of crystallization from a solution of the anhydrite of sodium sulfate, however, things are a bit different. Upon dissolution (at a temperature above 32.6 °C) heat was released, so it cannot make up a potential energy stock. However, upon forcibly supersaturating the solution of the anhydrite, heat must be absorbed (see below), and thus a potential energy stock is built up nevertheless. Upon crystallization of the anhydrite this absorbed heat is released again. And to keep the crystallization going, supersaturation must be maintained.

Let us summarize the cases of (1) melting of some crystalline substance, (2) crystallization of that substance from its undercooled melt, (3) dissolution of a substance like potassium chloride in water, (4) crystallization of this substance from its supersaturated aqueous solution, (5) dissolution of sodium sulfate decahydrate in water below 32.6 °C, (6) crystallization of this substance from its supersaturated aqueous solution below 32.6 °C, (7) dissolution of the anhydrite of sodium sulfate (dry sodium sulfate) in water above 32.6 °C, (8) crystallization of the anhydrite from its supersaturated aqueous solution above 32.6 °C, (9) dissolution of a gas in a mixture of gas(es), such as the dissolution of water vapor (solute) in the air (solvent), resulting in a gaseous solution, and (10) crystallization of a solid from a gaseous solution, such as the crystallization of snow crystals from water vapor in the air. (Throughout, Δ means "change of".)

Origin of potential energy stock for the form-generating process: Melting of some crystalline substance.
ΔG = ΔH − TΔS, for the state transition crystalline phase ⇒ melt.
Heating a solid (assuming it not to chemically disintegrate upon heating and melting) raises its temperature until its melting point has been reached. After this, the continued adding of heat energy does not increase the temperature anymore, but is used to randomize the particles of the solid, that is, to destroy its crystal lattice. The heat needed for this destruction is the heat of fusion, and gets stored as potential energy (latent heat). Above the melting point the stability of the liquid phase is higher than that of the solid phase, despite the fact that ΔH is positive (heat absorbed), because that will be more than compensated for by the increase in entropy of that part of the solid that is in the process of melting (the particles of the solid become randomized with respect to their position): TΔS is positive, guaranteeing the negativity of ΔG (the feasibility of the molten condition). That is to say, melting is possible under these conditions (temperature above melting point). Here the absorbed heat, together with the increase of the entropy of the system, makes up the potential energy surplus (potential energy stock) as the melt later becomes undercooled. And although heat is absorbed (ΔH positive), thus lowering the entropy of the surroundings from which this heat was taken, this must, in accordance with the Second Law of Thermodynamics, be more than compensated for by the increase of entropy there where the solid is melting (TΔS positive).

Expenditure of the potential energy stock in form generation: Crystallization of that substance from its undercooled melt.
ΔG = ΔH − TΔS, for the state transition melt ⇒ crystalline phase.
Upon crystallization (solidification) of a melt, the heat of fusion is given off again. The enthalpy term ΔH is negative: under conditions of undercooling the crystalline phase is more stable than the corresponding liquid phase. And this more than compensates for the decrease of entropy of the subvolume where the crystals are formed. The negativity of ΔH
means that the crystallization process in this case is exothermal. It uses the above mentioned potential energy stock of the melt. The negativity of ΔH also means that the entropy of the surroundings will increase, which, satisfying the Second Law of Thermodynamics, more than compensates for the decrease of entropy there where the crystalline phase appears (TΔS negative).

Origin of potential energy stock for the form-generating process: Dissolution of a crystalline substance like potassium chloride (KCl) in water.
ΔG = ΔH − TΔS, for the state transition crystalline phase ⇒ aqueous solution (KCl etc.).
To begin with, the next Figure gives the solubility curve of potassium chloride.
As long as the solution of potassium chloride is undersaturated, the dissolution process will continue (if enough material is available in the system). Heat is absorbed, according to the solubility curve of such a substance: ΔH is positive. But the entropy term more than compensates for this, because the solution, which is a mixture, has a much higher entropy than the pure substances (solute and solvent) separated from each other. So the change of Gibbs free energy (for dissolution feasibility) is negative, thus making dissolution possible. Here the absorption of heat, together with the increase of entropy of the system, makes up the potential energy surplus (potential energy stock) as soon as the solution becomes supersaturated. Although heat is absorbed (ΔH positive) because of the dissolution process, and the entropy of the surroundings must therefore decrease, for the system the entropy increases, and according to the Second Law of Thermodynamics the latter must be such that the net overall entropy increases.

Expenditure of the potential energy stock in form generation: Crystallization of this substance (KCl etc.) from its supersaturated aqueous solution.
ΔG = ΔH − TΔS, for the state transition aqueous solution ⇒ crystalline phase (KCl etc.).
As long as the solution is supersaturated (and crystallization nuclei are initially present), crystallization continues. Heat is being released: ΔH is negative. So the enthalpy term already contributes to making the Gibbs free energy difference ΔG (for crystallization feasibility) negative, and more than compensates for the decrease in entropy of that part of the solution where crystals are formed (TΔS negative). So also here the crystallization process is exothermal. It uses the above mentioned potential energy stock of the solution. The negativity of ΔH also means that the entropy of the surroundings will increase, which, satisfying the Second Law of Thermodynamics, more than compensates for the decrease of entropy there where the crystalline phase appears.

Origin of potential energy stock for the form-generating process: Dissolution of crystalline sodium sulfate decahydrate (Na2SO4·10H2O) in water at a temperature below 32.6 °C.
ΔG = ΔH − TΔS, for the state transition crystalline phase ⇒ aqueous solution (sodium sulfate decahydrate).
To begin with, see the next Figure.

Figure above: Dissolution, saturation, and supersaturation in water of the decahydrate of sodium sulfate at a temperature below 32.6 °C.

As long as the solution of the decahydrate is undersaturated, the dissolution process will continue (if enough material is available in the system). Heat is absorbed, according to the solubility curve of the decahydrate: ΔH is positive. But the entropy term more than compensates for this, because the solution, which is a mixture, has a much higher entropy than the pure substances (solute and solvent) separated from each other. So the change of Gibbs free energy (for dissolution feasibility) is negative, thus making dissolution possible. Here the absorption of heat, together with the increase of entropy of the system, makes up the potential energy surplus (potential energy stock) as soon as the solution becomes supersaturated. Although heat is absorbed (ΔH positive) in and by virtue of the dissolution process, and the entropy of the surroundings must therefore decrease, for the system the entropy increases, and according to the Second Law of Thermodynamics the latter must be such that the net overall entropy increases.

Expenditure of the potential energy stock in form generation: Crystallization of the decahydrate of sodium sulfate (Na2SO4·10H2O) from its supersaturated aqueous solution at a temperature below 32.6 °C.
ΔG = ΔH − TΔS, for the state transition aqueous solution ⇒ crystalline phase (decahydrate).
To begin with, see the next Figure.

Figure above: Crystallization from water of the decahydrate of sodium sulfate at a temperature below 32.6 °C.

As long as the solution is supersaturated (and crystallization nuclei are initially present), crystallization of the decahydrate continues (at a temperature below 32.6 °C). Heat is being released: ΔH is negative. So the enthalpy term already contributes to making the Gibbs free energy difference ΔG (for crystallization feasibility) negative, and more than compensates for the decrease in entropy of that part of the solution where crystals are formed (TΔS negative). So also here the crystallization process is exothermal. It uses the above mentioned potential energy stock of the solution. The negativity of ΔH also means that the entropy of the surroundings will increase, which, satisfying the Second Law of Thermodynamics, more than compensates for the decrease of entropy there where the crystalline phase appears.

Origin of potential energy stock for the form-generating process: Dissolution of crystalline sodium sulfate anhydrite (dry sodium sulfate, Na2SO4) in water at a temperature above 32.6 °C.
ΔG = ΔH − TΔS, for the state transition crystalline phase ⇒ aqueous solution (sodium sulfate anhydrite).
To begin with, see the next Figure.

Figure above: Dissolution, saturation, and supersaturation in water of the anhydrite of sodium sulfate at a temperature above 32.6 °C.

As long as the solution is undersaturated, the dissolution process will continue (if enough material is available in the system). According to the solubility curve of the anhydrite, heat is evolved: ΔH is negative, TΔS is positive. As has been said, heat is
evolved. It is dissipated into the environment and cannot be retrieved. So while dissolution of dry sodium sulfate is still going on, no energy is in fact stored in the system as latent heat. The high potential energy of the initially unmixed solute-solvent system (dry sodium sulfate and water) decreases (ΔG negative) upon dissolution. Therefore, in the present case dissolution proceeds spontaneously even when no heat were available to take in (ΔH is negative). Heat release, however, slows down the dissolution of dry sodium sulfate, because the temperature of the solution increases, and then the solubility of the anhydrite decreases. So in order for dissolution to continue, the released heat must constantly be removed from the site of dissolution, just as, reversely, heat must be supplied to the site of dissolution in the case of potassium chloride, and also in the case of the decahydrate of sodium sulfate.
Although, when dissolving anhydrite in water of above 32.6 °C, heat is evolved when going from undersaturated to saturated, this cannot, according to me (see below), be the case anymore when continuing from saturated to supersaturated, because otherwise supersaturation of the solution would be a spontaneous process (if enough solute is available), which it clearly is not. So upon supersaturation of the anhydrite in water above 32.6 °C heat must be absorbed. And it is this heat which is released again upon crystallization. It, together with the increase of entropy of the system, constitutes the potential energy stock of the supersaturated solution. While in all endothermal dissolution processes heat is absorbed all the way from undersaturated to supersaturated, in exothermal dissolution processes, such as that of our anhydrite, heat is only absorbed from the saturated condition onwards to the supersaturated condition.
The latter case (exothermal dissolution) can be elucidated a little more. If we go from a saturated solution of anhydrite to a supersaturated one, we initially still have solid anhydrite present in a saturated solution (above 32.6 °C), that is, a solution saturated with respect to anhydrite. Upon dissolution, and also upon supersaturation, the entropy term TΔS is favorable: a solution has a higher entropy than that of the solid state, that is, the state of solute and solvent being separated. But because (the solution becoming supersaturated) further dissolution of the solid into the already saturated solution is not a spontaneous process, the enthalpy term ΔH should clearly be unfavorable, that is, ΔH must be positive, which means that heat is absorbed. This heat is used to destroy the crystal lattice, that is, to randomize the particles of the crystalline residue still remaining in the saturated solution, and is stored as potential energy. So while dissolving sodium sulfate anhydrite up to saturation is exothermal, continued dissolution, making the solution supersaturated, must be endothermal. As such this stored heat contributes to the potential energy stock. In the present case heat is at first released (undersaturation ==> saturation), after which some heat is absorbed (saturation ==> supersaturation). This changes the entropy of the surroundings. The entropy of the system increases. And according to the Second Law of Thermodynamics these changes must be such that the net overall entropy increases.

Expenditure of the potential energy stock in form generation:
Crystallization of the anhydrite of sodium sulfate (Na2SO4) from its supersaturated aqueous solution at a temperature above 32.6 °C.
ΔG = ΔH − TΔS for the state transition: aqueous solution ==> crystalline phase (anhydrite).
To begin with, see the next Figure.

[Figure above: Crystallization from water of the anhydrite of sodium sulfate at a temperature above 32.6 °C.]

As long as the solution is supersaturated, and initially crystallization nuclei being present, crystallization of the anhydrite continues at a temperature above 32.6 °C. Upon supersaturating the solution of the anhydrite at a temperature above 32.6 °C heat is absorbed, as explained above. Upon crystallization of the anhydrite this heat is
released again, so ΔH is negative. And because the entropy of the system decreases, TΔS is negative, as such contributing to the positivity of ΔG, that is, unfavorable for the process at issue. But because crystallization of the anhydrite actually does take place, that is, it is regularly observed to take place under the conditions of supersaturation, a temperature above 32.6 °C, and the presence of suitable crystallization nuclei, we must assume that the negativity of ΔH also in the present case more than compensates for the unfavorable entropy term TΔS. So also here the crystallization process is exothermal. It uses the above-mentioned potential energy stock of the supersaturated solution. The negativity of ΔH also means that the entropy of the surroundings will increase, and, satisfying the Second Law of Thermodynamics, more than compensates for the decrease of entropy there where the crystalline phase appears.

Origin of the potential energy stock for the form-generating process:
Dissolution of a gas in a mixture of gas(es), such as the dissolution of water vapor (solute) in the air (solvent), resulting in a gaseous solution.
ΔG = ΔH − TΔS for the state transition: gaseous phase (gas, vapor) ==> gaseous solution (water vapor + air).
It is well known that warmer air can hold more water vapor per unit volume of air than colder air can, which means that the solubility of water vapor increases with temperature. So upon heating, this system responds with dissolution of more water vapor (if available), which, according to Le Châtelier's principle, means that upon dissolution of water vapor in the air heat is absorbed. So ΔH is positive, but the entropy term more than compensates for this, because a solution has a much higher entropy than the solvent and solute in a separated condition (which we can see as a hyper-undersaturated solution). So ΔG (for dissolution feasibility) is negative, meaning that dissolution is possible under conditions of undersaturation. And as long as the air is undersaturated, the dissolution will continue, provided that material (water) continues to be available. Here the absorption of heat, together with the increase of entropy, makes up the potential energy surplus (potential energy stock) as soon as the solution becomes supersaturated. Although heat is absorbed by the system (ΔH positive) because of the dissolution process, and the entropy of the surroundings must therefore decrease, for the system the entropy increases, and according to the Second Law of Thermodynamics these changes must be such that the net overall entropy increases.

Expenditure of the potential energy stock in form generation:
Crystallization of a solid from a gaseous solution, such as the crystallization of snow crystals from water vapor in the air.
ΔG = ΔH − TΔS for the state transition: gaseous solution ==> crystalline phase (snow).
Here, upon crystallization, the heat of solution absorbed when dissolution took place is released again, so ΔH is negative, and as such contributes to the negativity of ΔG (for crystallization feasibility), and, when the temperature is sufficiently low, more than compensates for the decrease of entropy there where the crystalline phase is formed. This decrease of entropy of the product of crystallization, as compared with the initial solution, causes ΔS to be negative, and thus causes the term TΔS to be negative (because the absolute temperature is positive in every process). And subtracting this negative term from ΔH works against ΔG becoming negative. But because T is low (supposed to be below 0 °C), the negativity of the term TΔS is not very significant, and neither, then, is its contribution to the positivity of ΔG. And because, in the presence of suitable crystallization nuclei, such an undercooled solution is indeed observed to give rise to crystal formation, we know that the degree of negativity of ΔH more than counterbalances the just-mentioned insignificant contribution to positivity. So ΔG (for crystallization feasibility) will be negative, implying that crystallization is possible under the mentioned circumstances. ΔH being
negative means that also in the present case the crystallization process is exothermal. It uses the potential energy stock of the solution. The negativity of ΔH also means that the entropy of the surroundings will increase, and, satisfying the Second Law of Thermodynamics, more than compensates for the decrease of entropy there where the crystalline phase appears.

Let us now give a more compact, and therefore more surveyable, account of the above list, concentrating on the origin and expenditure of the potential energy stock with respect to the form-generating process (crystallization), and on the entropy changes of the system and its surroundings. Here ΔH (change in enthalpy) in ΔG = ΔH − TΔS relates to the stabilities of the initial and final states, and therefore to heat absorption or heat release, effecting a change in entropy of the surroundings of the system proper. The term TΔS relates to the change in entropy ΔS of the system, that is, it relates to the difference in entropy between the initial and final state of the system. In all natural processes some heat is always dissipated, and therefore these processes are irreversible. And this means that not only can the net entropy not decrease, it must increase. That is to say that the sum of the entropy change of the system and the entropy change of the surroundings is always positive: the overall entropy, that is, the entropy of the Universe, increases (Second Law of Thermodynamics for irreversible processes).

Melting of some crystalline substance.
Active heat is absorbed (ΔH positive) and transformed into potential energy. Entropy of system increases (TΔS positive). Entropy of surroundings decreases (ΔH positive). Overall entropy increases. Absorbed heat + entropy increase of system = potential energy stock for future form generation.

Crystallization from undercooled melt.
Upon undercooling of the melt, active heat energy (kinetic energy of particles) is forcibly drained from the system. And now the potential energy stock becomes an energy source for the form-generating process. Because of the lowering of the kinetic energy of the particles, their attractive forces become dominant. Upon ensuing crystallization (solidification of the melt): Heat is liberated, that is, potential energy is transformed into active heat, and therefore ΔH is negative. Entropy of the system decreases (TΔS negative). Entropy of the surroundings increases (ΔH negative). Overall entropy increases.

Dissolution of crystalline potassium chloride (KCl) in water.
Active heat is absorbed (ΔH positive) and transformed into potential energy. Entropy of system increases (TΔS positive). Entropy of surroundings decreases (ΔH positive). Overall entropy increases. Absorbed heat + entropy increase of system = potential energy stock for future form generation.

Crystallization of potassium chloride (KCl) from its supersaturated aqueous solution.
Upon forcibly supersaturating the solution, for example by letting the solvent of a saturated solution slowly evaporate, the particles of the solute are being crowded together. Therefore their attractive forces, by which they attract each other, become dominant. Upon crystallization: Heat is liberated, that is, potential energy is transformed into actual heat, and therefore ΔH is negative. Entropy of system decreases (TΔS negative). Entropy of surroundings increases (ΔH negative). Overall entropy increases.

Dissolution of crystalline sodium sulfate decahydrate (Na2SO4·10H2O) in water at a temperature below 32.6 °C.
Heat is absorbed (ΔH positive). Entropy of system increases (TΔS positive). Entropy of surroundings decreases (ΔH positive). Overall entropy increases.

Crystallization of sodium sulfate decahydrate
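The sign bookkeeping of ΔG = ΔH − TΔS that runs through all the cases above can be sketched numerically. The following is a minimal illustration; the ΔH and ΔS values are invented round numbers chosen only to reproduce the sign pattern of an endothermal dissolution and the subsequent exothermal crystallization, not measured data for any of the substances discussed:

```python
# Sign bookkeeping for dG = dH - T*dS, as used in the text above.
# All numerical values are ILLUSTRATIVE ONLY (not measured data).

def gibbs(dH, dS, T):
    """Return dG (J/mol) from enthalpy change dH (J/mol),
    entropy change dS (J/(mol*K)) and absolute temperature T (K)."""
    return dH - T * dS

def surroundings_entropy_change(dH, T):
    """At constant T, the surroundings change entropy by -dH/T."""
    return -dH / T

T = 298.15  # K, roughly room temperature

# Endothermal dissolution (KCl-like case): dH > 0, but the mixing
# entropy dS > 0 dominates, so dG < 0 and dissolution is feasible.
dH_dis, dS_dis = +17000.0, +75.0
dG_dis = gibbs(dH_dis, dS_dis, T)

# Crystallization from the supersaturated solution: signs reverse.
# dH < 0 (heat released) and dS < 0, but |dH| > T*|dS|, so dG < 0 again.
dH_cry, dS_cry = -17000.0, -50.0
dG_cry = gibbs(dH_cry, dS_cry, T)

# Second Law check: entropy change of system plus that of the
# surroundings (-dH/T) must be positive for each spontaneous step.
net_dis = dS_dis + surroundings_entropy_change(dH_dis, T)
net_cry = dS_cry + surroundings_entropy_change(dH_cry, T)

print(f"dissolution:     dG = {dG_dis:9.1f} J/mol, net dS = {net_dis:6.2f} J/(mol*K)")
print(f"crystallization: dG = {dG_cry:9.1f} J/mol, net dS = {net_cry:6.2f} J/(mol*K)")
```

Both steps come out with ΔG negative and net entropy change positive, which is exactly the pattern the compact account above states for each dissolution/crystallization pair.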

Original URL path: http://www.metafysica.nl/ontology/general_ontology_29m5d.html (2016-02-01)
