Purpose
Interesting that Guevarra Erra himself admits that the results are not watertight!
Indeed, the LZ complexity of one of the four epileptic patients in the first analysis showed no change between seizure and alert states (although that person did remain conscious during part of the seizure). In another individual, LZ complexity actually increased in the second analysis while that person was asleep. Guevarra Erra says that he and his colleagues didn't carry out a statistical analysis of their results in part because of the "very heterogeneous" nature of those results. But he nevertheless remains "highly confident" that the correlations they have identified are real, particularly, he argues, because they were seen in "two very different sets of data".
Peter McClintock, a physicist who works on nonlinear dynamics at Lancaster University in the UK, describes the research as "intriguing" but says that the consciousness–entropy correlation should be confirmed using a larger number of subjects. He also suggests investigating "what happens in other brain states where consciousness is altered", such as anaesthesia.
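For readers unfamiliar with the measure mentioned in the excerpt above: Lempel-Ziv (LZ) complexity roughly counts how many distinct "phrases" are needed to build up a signal, so a repetitive signal scores low and a varied one scores high. Here is a minimal sketch in Python of one simple LZ78-style variant; the function name and example strings are mine for illustration, and the actual study used a different binarized-signal pipeline:

```python
def lz_phrase_count(s):
    """Count distinct phrases in an LZ78-style left-to-right parse.

    Each phrase is the shortest chunk not yet seen as a phrase;
    more varied strings need more phrases (higher complexity).
    """
    seen = set()
    phrase = ""
    count = 0
    for ch in s:
        phrase += ch
        if phrase not in seen:  # new phrase: record it and start over
            seen.add(phrase)
            count += 1
            phrase = ""
    return count  # any unfinished trailing phrase is ignored here

print(lz_phrase_count("00000000"))          # highly repetitive: few phrases
print(lz_phrase_count("0001101001000101"))  # more varied: more phrases
```

The point of the measure is only the comparison: the repetitive string parses into far fewer phrases than the varied one of the same length.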
On topic, however (in regard to the original post), I don't think anyone or anything can truly say what the purpose of humans or life is.
But that said, I am quite drawn to a quote by Nobel Prize-winning physiologist Albert Szent-Gyorgyi, who said, "Life is nothing but an electron looking for a place to rest".
I agree strongly with that quote.
To me, all other explanations are humans trying to make themselves more relevant and important than they actually are.
That is superb! And I agree Algebra! We put far too much emphasis on ourselves.
I would only rephrase it to say, "Life begins with an electron looking for a place to rest."
Only because information is lost the further we boil things down. Atoms may be the canvas, but they're not the picture. There's a reason why neuroscientists are not psychologists, geneticists are not neuroscientists, and chemists are not geneticists (generally speaking): each field studies the emergent properties that are built upon, but absent from, the fields below.
If it were truly "nothing but", then all we would need in science are chemists.
- "Life begins with an electron looking for a place to rest"
No, because that would be scientifically inaccurate.
How so?
I don't detect the relevance of your response above. Does it invalidate the source by Mateos et al used in my hypothesis?
- It certainly raises the question of how you can build your hypothesis on a hypothesis that is as yet unverified and whose results are not watertight, as Guevarra has stated.
Furthermore, fellow scientists in the field believe that whilst it is an interesting avenue, it has yet to be comprehensively tested and analysed, and to have yielded the results required to validate it.
1.) Consider that "an electron looking for a place to rest" may perhaps be observed in a principle from science.
2.) Recall that purpose may mean principle, and there are many principles in science.
3.) My hypothesis reasonably underlines yet another principle in science.
- In science, a hypothesis is an idea or explanation that you then test through study and experimentation.
Since your 'hypothesis' cannot be tested that way, I don't think it is wise to liken it to principles in science.
1.) My hypothesis (with the support of several equations) underlines a mathematical sequence, such that there is reasonably some measure of macrostate partition {X}, such that human intelligence 'C' is exceeded. (i.e. AGI/ASI.)
2.) Contrarily, you did make that claim, and as a result you either appeared to lack understanding of my hypothesis, or appeared not to have read beyond its first line:
3.) You failed to see that beyond the human brain, the equations indicated adaptive behaviour as a non-equilibrium process in open systems. Therefore, there was no need for me to "cherry-pick", as the broad scope of Alex Wissner-Gross' statement, together with the context of his entire paper, supports my hypothesis!
3.b.) I don't detect the relevance of your expression of the difference between facts/laws and potential models.
4.) Recall again, that purpose may mean principle, and there are many principles in science.
4.b.) My hypothesis reasonably describes yet another principle in science; one that may describe some relation C ∈ {X}, where C represents some Stirling approximation of human brain state (via S = N ln(N/(N − p)) − p ln(p/(N − p))) on Shannon entropy via Mateos et al, and {X} some representation of the system space of macrostates, as underlined by Alex Wissner-Gross. Thereafter, my hypothesis underlines that some relation exists such that {X} subsumes some larger measure of entropy maximization methodology, i.e. AGI/ASI, beyond C.
Of course, AGI/ASI is not yet precisely defined, so that measure (the additional novel mathematical notation that you appeared to request), although permitted by the laws of physics, is not contained in my hypothesis!
I am working on a novel learning model that may be a feasible measure of {X} above that may reasonably approximate some degree of artificial general intelligence, although that's in its infancy:
Reference: Supersymmetric Artificial Neural Network.
- None of which support the link to purpose
- I said "Having read your hypothesis and the linked paper I would say that the paper just shows some correlation.
I cannot see where they may claim a causal link in any way, in fact it actually seems to just deal with the organisation of the brain,
its structure and also function."
Where have I stated, as you put it, "He/she claimed that my paper merely regarded equations about the brain"?
You misrepresented my position quite badly there.
- Wrong again. I accept the findings of the papers; what I do not accept is any link to purpose.
This is what you have come up with, using quotes from these papers to support your 'hypothesis'.
- Recall again how a hypothesis in science requires an idea or explanation that you then test through study and experimentation.
Once you can offer a 'hypothesis' that follows suit, it shall be treated as such.
- Thank you for finally telling us your usage of entropy
- So can we now safely agree that your hypothesis is not strictly a scientific hypothesis (as it provides no observational testing)?
It relies heavily on a paper whose own authors admit its results are not watertight and are as yet unproven.
You offer no predictions, no observational testing, and no data for analysis.
- It is, I would concede, extremely interesting, but the paper you cite is in its infancy, as is your 'hypothesis'. But I wish you luck.
Sounds fancy, huh? Of course, anyone who knows what Stirling's approximation is knows this statement is gibberish.
I am glad you are here, Nylar, as I then do not have to take out a few textbooks and wade through the above postings to come to the same conclusion you have. I can just take your word for it, as, so far, all your discussions that I have read have been rock solid on the actual educated conclusions to these subjects.
Shame. Don't lend your intellect to anyone that's not you.
I appreciate that LogicFTW.
For those who don't know:
4 factorial, written as 4!, means 4 x 3 x 2 x 1 = 24
4! was easy to get a number for, but what about 50! which is 50 x 49 x 48 x 47 x ... =?
Not so easy, no one wants to do all that multiplication, but some people came up with a way to estimate it (one of them was named Stirling). It is just a mathematical trick to assign a number to a large factorial. Using the method named after Stirling: 50! is about 3 x 10^64.
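The estimate above can be checked in a few lines of Python. The function name below is my own illustrative choice; the formula is the standard sqrt(2*pi*n) * (n/e)^n form of Stirling's approximation:

```python
import math

def stirling_factorial(n):
    """Stirling's approximation: n! ~ sqrt(2*pi*n) * (n/e)**n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

print(math.factorial(4))       # exact: 24
print(math.factorial(50))      # exact 50!, a 65-digit integer
print(stirling_factorial(50))  # roughly 3.04e64, within about 0.2% of exact
```

For n = 50 the relative error of the approximation is about 1/(12n), i.e. well under one percent, which is why it is such a useful stand-in for huge factorials.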
In short: Stirling's approximation is a method of converting one nasty number (a large factorial) into a number you can more easily use. Knowing that, let's reread what the OP wrote:
By saying a Stirling approximation of a human brain state, he is saying the human brain state is a number (just a number, like 7 or 10^64). Now I'm sure he read somewhere that someone used a Stirling approximation to get an approximate count of something in the brain (I fully expect him to link something to this effect). But it sure as fuck wasn't the state of the human brain, because even the OP isn't crazy enough to think that you can describe everything there is to know about someone's brain with a single number. He's citing a mathematical trick as something fundamental because he doesn't have a clue what he is talking about. He has confused a tree for the forest because he doesn't know the difference.
Nyar - " Now I'm sure he read somewhere where someone used a Stirling approximation to get an approximate count of something in the brain (I fully expect him to link something to this effect)"
Yeah, unless I'm blind, it seems like he did link it. It's literally in the part of his quote that you left out. You follow the link and get this:
"However, the estimation of C (the combinations of connections between diverse signals) is not feasible due to the large number of sensors; for example, for 144 sensors, the total possible number of pairwise connections is C(144, 2) = 10296, then if we find in the experiment that, say, 2000 pairs are connected, the computation of C(10296, 2000) has too large numbers for numerical manipulations, as they cannot be represented as conventional floating point values in, for instance, MATLAB.
To overcome this difficulty, we used the well-known Stirling approximation for large n: ln(n!) ≈ n ln(n) − n. The Stirling approximation is frequently used in statistical mechanics to simplify entropy-related computations. Using this approximation, and after some basic algebra, the equation for entropy reads, S = N ln(N/(N − p)) − p ln(p/(N − p)), where N is the total number of possible pairs of channels and p the number of connected pairs of signals in each experiment (see Results for details and notation). Because this equation is derived from the Shannon entropy, it indicates the information content of the system as well."
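For the curious: that Stirling-simplified entropy is (up to the approximation) just the log of the number of ways of choosing p connected pairs out of N possible pairs, ln(N choose p). A minimal sketch in Python; the function name and the sample values of N and p are illustrative, not taken from the paper:

```python
import math

def entropy_stirling(N, p):
    """The paper's Stirling-simplified entropy,
    S = N*ln(N/(N-p)) - p*ln(p/(N-p)),
    where N = possible channel pairs, p = connected pairs observed."""
    return N * math.log(N / (N - p)) - p * math.log(p / (N - p))

# Cross-check against the exact quantity it approximates, ln(C(N, p)),
# for illustrative values N = 1000 possible pairs, p = 300 connected:
approx = entropy_stirling(1000, 300)
exact = math.log(math.comb(1000, 300))
print(approx, exact)  # the two agree to within roughly one percent
```

The point of the trick is exactly what the quote says: C(10296, 2000) itself overflows ordinary floating point, but its logarithm, computed this way, is an easy, well-behaved number.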
I personally don't know what any of this math means; but it doesn't seem as nonsensical as you made it out to be. I'm more tempted to believe you're once again misinterpreting, misapplying, and misunderstanding what someone is saying.
Ask for clarification, before making accusations.
The one way you could have a Stirling approximation of the state of a brain is if someone's entire brain and all of its functions could be scooped out and replaced with a single number, like 7 or 2^50.
It would be like giving a complete description of a house by listing only its mass, claiming that bathrooms, square footage, color, and location are not attributes that houses have.
To Programming: in the context of this debate, purpose can mean only one thing... what we were created for. So when you stop to consider that we are only here by a one-in-a-bazillion chance occurrence, followed by a billion years of evolution, again, we have no purpose.
Sure, you can assign a purpose to yourself... I could say my purpose is to produce fine art photography, but the reality is, the world will not miss me for half a second when I'm gone.
If there were a God, then I guess our purpose would be his amusement at our pain and suffering, but of course most of us know that's not true either.
?
1. Science is not gibberish.
2. That expression is not gibberish; it underlines data as described in a paper by Mateos et al, as cited in my hypothesis.
3. Reference-A: "Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness."
4. Reference-A, Excerpt: "To overcome this difficulty, we used the well-known **Stirling approximation** for large n: ln(n!) ≈ n ln(n) − n. The **Stirling approximation** is frequently used in statistical mechanics to simplify entropy-related computations. Using this approximation, and after some basic algebra, the equation for entropy reads, S = N ln(N/(N − p)) − p ln(p/(N − p)), where N is the total number of possible pairs of channels and p the number of connected pairs of signals in each experiment (see Results for details and notation)."
5. Did you actually read the paper?
1. You ought not to simply take anybody's word, especially if you wish to contribute something of substance to a discussion.
2. Nyarlathotep's claim was demonstrably invalid; that expression of mine underlines data as described in a paper by Mateos et al, as cited in my hypothesis.
3. Reference-A: "Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness."
4. Reference-A, Excerpt: "To overcome this difficulty, we used the well-known **Stirling approximation** for large n: ln(n!) ≈ n ln(n) − n. The **Stirling approximation** is frequently used in statistical mechanics to simplify entropy-related computations. Using this approximation, and after some basic algebra, the equation for entropy reads, S = N ln(N/(N − p)) − p ln(p/(N − p)), where N is the total number of possible pairs of channels and p the number of connected pairs of signals in each experiment (see Results for details and notation)."
5. This means rather than "gibberish", such is science.
You are right, I should not take anyone's word.
I will readily admit an interest in this subject, but the conversation is over my current knowledge of the subject. Perhaps I am just being lazy and avoiding revisiting some college classes I would rather forget, heh.
I feel like we are a long way off from creating a true "AI". Every machine learning and AI experiment so far has been extremely narrow in focus; for a broad, all-purpose, self-sufficient AI, the current tech still lags behind that of an ant.
I agree with the original post in that we are all insignificant in the vastness, in size and time, that is the universe. If we humans do not get our act together and preserve this tiny oasis of life on earth, we will most assuredly wink out of existence long before we could create an AI that could even begin to challenge basic entropy laws in a meaningful way.
1.) I read your words carefully, and I advise that you do the same. I genuinely wonder whether you've actually read my hypothesis, or understand the papers it cites!
2.) To begin, you may analyse whether or not the human measure "C" (as described by Mateos et al) pertains to (or is compatible with) the partition regime {X} as underlined in Alex Wissner-Gross' paper. (See my relation "C ∈ {X}".)
3.) In other words, my hypothesis may be falsified if the relation C ∈ {X} is false. I posit that such a relation is valid, given the equations cited in my hypothesis. (You clearly failed to observe that said relation of mine appears in neither the paper by Wissner-Gross nor the paper by Mateos et al!)
4.) The word purpose may mean principle, and there are many principles in science.
5.) Reference, Wikipedia/Laws of Science: "The laws of science, scientific laws, or scientific principles..."
6.) People tend to enter discussions not recalling or knowing that purpose may mean principle. They then tend to criticize their feeling about what the word purpose means, instead of what it is actually typically defined to mean, as you demonstrated above!
7.) This means what I underlined in my hypothesis, is reasonably yet another principle in science, i.e. one that may describe the objective/goal of human intelligence, given evidence.
8.) Next time you enter discourse, recall that it is key to look up the definition of the word in question, as words often have larger scopes than we may recall.
1.) On the contrary, there are several principles in science that may describe what particular things were created for, i.e. objectives. (Both principle and objective are synonyms for purpose.)
2.) Science is actually objective (or rather seeks to be objective)
3.) Reference-A: https://en.wikipedia.org/wiki/Objectivity_(science)
4.) Reference-B, Wikipedia/Laws of Science: "The laws of science, scientific laws, or scientific principles..."
5.) Regardless of the minor probability that yielded our existence, our existence may have objectives, or there may be principles that describe what human intelligence may approach, given entropy maximization equations.
a.) Kurzweil, director of engineering at Google, predicts that computers will have human level intelligence by 2029.
b.) As time has passed, learning models have become more general than you express above. You may now notice that the same type of model, via "Deep Learning", is applicable to several modalities or forms of input data.
These models become more and more general the more "biological priors", or biological brain-like equations, researchers incorporate in their models.
Reference: Neuroscience inspired artificial intelligence, Deepmind.
- Firstly, I will simply point out where you are being dishonest, along with a simple thought that a lay person may ask in regard to your thought experiment.
- The dishonest part: clearly you did not read my comment very carefully, and I will now open it to the floor of all members of this forum.
My quote, "Having read your hypothesis and the linked paper I would say that the paper just shows some correlation.
I cannot see where they may claim a causal link in any way, in fact it actually seems to just deal with the organisation of the brain,
its structure and also function".
To which you claimed, "He/she claimed that my paper merely regarded equations about the brain"
Where is the correlation? You are being utterly dishonest.
- Secondly, a thought to provoke: you are essentially asserting that all human purpose is to bring about AI. Now let us consider the number of people actively working in this field in comparison to the population of the planet; I would be massively generous and offer you 1% of the global population.
Do you see the problem that people may find with this assertion?
- Now to the crux of the matter. Your 'hypothesis' hinges on the work by Mateos et al and Ramon Guevarra.
Guevarra has already admitted that the results of the paper are not "water tight", and that the first analysis showed no change between seizure and alert states.
Guevarra goes on to concede that they did not carry out a statistical analysis of their results because of the heterogeneous nature of those results. So it is still technically up in the air, independent of how reasonable it appears (science prefers actual results that match predictions and models).
Finally, and this is quite the coup de grâce to your 'hypothesis': on the question of entropy, Guevarra said, "Personally I would like to have a better understanding of the physical processes taking place in the brain, before employing the label 'entropy'". He goes on to explain that it was because Perez Velazquez was keen to use the term in the paper.
He adds that fresh experiments are required to measure thermodynamic quantities in subjects' brains.
- So, in closing: your 'hypothesis' is not a scientific hypothesis, in that you still offer no predictions, no testable observations, and no data that can be analysed, not to mention you fail to show how all humans are working towards the goal of creating AI.
The paper by Mateos et al that you cite in your claim has not been validated and is disputed not only by scientists within the field but also by its co-authors!!!
You are essentially building a hypothesis whose foundations rely on string theory.
I think if PGJ's hypothesis were built to essentially say 'Humanity's purpose is to create AGI', that would be more palatable.