As science expands, the fraction of what is known that any one scientist can retain in her or his head is diminishing. This is exacerbated by the growing specialization that our modern scientific establishment encourages.
In short, it is getting harder and harder for scientists to keep up with the rapidly expanding knowledge of the scientific community as a whole.
This begs the question: can artificial intelligence assist with this process?
More: https://goo.gl/RvLz26
@gideonro And finally, looking at the question itself:
What is "science"? What is "knowledge"? What purpose do they serve? Is it merely to extend capability, or do science and knowledge /require/ human comprehension of them? How many humans must comprehend? How /much/ must they comprehend?
Is the capacity of the brain finite? Flexible? Over what timescale? A lifetime? Several generations? Hundreds or thousands?
What happened 200k and 50k and 6k years ago to humans?
@gideonro I don't beg, I raise.
(Cats, /especially/ space alien cats, refuse to beg.)
Organisation is close, though I'll suggest: /science/ is /structure/. It's a way of finding the inner core of knowledge.
In one sense, science has grown. In another, we've discarded an older, or really, /several/ older structures /which were less efficient than those we use now/. Either in that they explained less, or explained it less usefully.
As to the other points, suggestions...
@gideonro That is, /one/ person having knowledge, and either not wanting, or not being able to convey it to others, isn't science.
Corollary: Science is fundamentally social.
And even a group of people having knowledge, if unable to transmit it through space and time to others, is at best, small science.
Corollary: Science requires propagation through time and space.
The answer to "how many" becomes empirical: Enough to satisfy these requirements, at least.
3/
@gideonro If you want an /applied/ science, some capacity to translate science into at /least/ a practical observational practice, and preferably a practical /transformative/ process (industrial, etc.), would be an additional element.
The practitioners -- engineers, operators, technicians -- don't need to understand /all/ the science, but enough of it to form working world-models of their tools and domain in which they are applying them. Failures to do so have a name.
4/
@gideonro We call them "engineering disasters".
I highly recommend Charles Perrow's work in that field.
As to human brain capacity, its finiteness, and evolution: I suspect that it is finite, that it /does/ evolve, but that it evolves at best slowly.
Much of the advance in intellectual capacity we've witnessed in 1000s of years has come from, if you will, mental hygiene, rather than biological evolution. Removing barriers to thought, rather than increasing capacity.
5/
@gideonro We know that brain capacity is hugely susceptible to conditions of natality and childhood development. Improved nutrition, general hygiene, and avoidance of infectious disease, as well as of psychological traumas and environmental toxins, all improve the baseline floor.
Availability of education and educational materials, most especially books, is a tremendous factor.
Reducing the obligation of every individual to contribute to hard labour as well.
6/
@gideonro With educational opportunity and talent spotting, the opportunity to not /waste/ talent.
With the capacity to specifically reward intellectual ability (a domain in which I see ... substantial room for improvement), again, the option for intellect to be applied at expanding knowledge or the capacity to work on knowledge: intake, processing, storage, transmission.
Developing better epistemic systems: speech, language, writing, efficient alphabets, printing...
7/
@gideonro All of these increase the efficiency and effectiveness of both general and specialised / autodidactic education (though they can introduce inefficiencies elsewhere -- negative hygiene factors).
The evolution of cultural systems, most especially the shift from religious systems of morality, behaviour, cohesion, and understanding, to scientific and philosophical ones, probably also contributes markedly: substituting evolutionary progress for dogmatic orthodoxy.
8/
@gideonro Simply improving the quality of freshwater supplies, switching from a reliance on alcohol to caffeinated beverages (coffee, tea), and carbohydrate sufficiency also almost certainly increase intellectual capacity.
In short: we've been, incrementally over time, giving our brains improved conditions to develop, better training, better tools, and better systems, methods, and structures of knowledge.
What "science" is in 2017 is vastly different from 1717, or 1917.
9/
(or, arguably, 1967)
@sydneyfalk @gideonro Not nearly so much. I'd put the watershed in 1948.
Though if you want to make an argument for 1967, I'm open to hearing it.
I also think that Science 2017 will be markedly different from ... hard to say, but, assuming we don't all fall off the edge of the Earth, 2067 - 2017 is a pretty safe range.
(from what I know of the differences between 48 and 67, honestly, I don't see enough difference to start the argument -- but I'm not enough of a student of history, is likely the issue -- so it's probably my gaps in knowledge, more than anything else)
@sydneyfalk @gideonro Informed or not, I'd be interested in your argument.
I've got a specific reason, dated to 1948, but that's just me. Men in masks, and space alien cats, cannot be trusted.
IIRC, the first faltering efforts towards ARPAnet were in 67 (even though it was 'created' in 69), and to my mind that's a sea change in the ability of information dissemination that itself spread, through the scientists and engineers working military contracts, which led to CSNET, etc. etc.
(I'm obsessed with infotech, though, so the fact that infosci development became a necessity at that point may simply loom larger to my thinking.)
@sydneyfalk @gideonro I'm about to give you another reason for 1967 in the main thread.
(I'm perfectly fine with being wrong or right about it -- my views on history are unlikely to matter, in the long run. ^_^ One of the little consolations that often only console me is the "everything I've ever done will eventually be lost and forgotten", because I'm so very often wrong, and so very good at breaking things without trying.)
@sydneyfalk @gideonro This isn't so much wrong or right, as /having a good model/.
I'm interested in seeing /why/ you believe what you do. I may even beat you up over it a bit, though that's generally done with an interest in testing the model itself, and its foundations, and/or establishing my understanding of it.
Gideon's been through a few rounds of that with me. We disagree on a fair bit, but generally agreeably.
I'm also fine with that -- but often people are very concerned with being Right Or Wrong, and I've spent most of my life being wrong, so I've become less concerned about it when it does happen. ^_^
I can understand wanting to examine and test one's own 'module', so to speak, and comparing the structures of others that occupy similar conceptual places, though. :) I try to do that myself, sometimes.
@sydneyfalk @gideonro I want to /find right/. I also manifestly try to avoid /insisting/ on being wrong.
My organising principle: show respect, especially for the truth.
https://www.reddit.com/r/dredmorbius/wiki/lair_rules
@sydneyfalk @gideonro Anyhow: I don't really care to discuss whether or not you should or shouldn't feel free to state what and/or why you believe something. But for the last time here: I'd be interested in your reasons.
Oh! I gave them, it was the CYCLADES/ARPAnet tech blasting off the launchpad.
I think that stuff had a vast impact on science as a whole, starting in military labs but spreading rapidly, and the wider access sought by scientists to a similar technology effectively revolutionized science as a whole via the storage and communication tools.
Infosci, to me, was a shift like writing or the printing press -- radical, and still shaking out.
@sydneyfalk @gideonro Ah, OK.
I think that was /significant/ (as was the printing press). I don't think it fundamentally changed /science/. Though it's changing access to and rates of change of science -- hygiene factors as I described earlier.
The impacts of the printing press (and ancillary aspects) are also IMO hugely underappreciated. See Elisabeth Eisenstein, Marshall McLuhan. I've written a bit on both. "Printing Press as an Agent of Change".
I think it's also that once the rate of science began to get uncoupled from the number of humans volunteered (or conscripted) in the effort, a primary limiter was gone.
Previously, a given scientist could try to interest other scientists in their work, perhaps pull some in, etc., but they were limited not only by physical aspects but the psychological limitations of leading people into things and perceived difficulties and such.
Once there was a way to set up one's (roughly) thought experiment, shove it into (roughly) a machine that could think about it "for you", and get a result the next day, suddenly your working capacity increased radically.
When those speeds increased and the 'thought experiments' became increasingly abstract, it effectively obviated all the gobs of calculation and physical work that would have been necessary to achieve similar results.
(Anyway -- that's really most of what I've thought on the subject. I honestly don't consider this stuff very often any more, TBH. I think I likely don't have a lot to contribute that others haven't already considered.)
@sydneyfalk @gideonro That's sort of the machine-assisted model.
My concern is when "machine assist" becomes "the machine does all the work". We've seen what happens when simplistic algorithms are unleashed in human spaces. The Banality of Evil. United's Dr. Dao.
I don't think it's avoidable except in degree. Nobody has a full understanding of all the code run on their devices. (They barely read EULAs.) It's micromanagement, and negates the core benefit of delegation -- handing one's intents/needed-next-steps to someone who handles them for you.
The question of how humanity handles the notion that every human may 'be management' to some degree still remains to be seen, but so far, not well. >.>
@sydneyfalk @gideonro The EULA question is somewhat separate.
The issue of what the limits of our understanding and simple /media processing capacities/ are -- in the sense of how many messages, of what complexity, we can assimilate in a day, is another one, and one that I've been looking at heavily for some months.
I'd argue ~10 - 1,000 messages in any given day, across all media. Mostly at the lower end of the scale.
That sounds plausible, but everybody's going to have different limitations. I don't know a damned thing about what's inside my car (beyond the fact it's a combustion engine that runs on gasoline), but that's partly out of necessity -- I can absorb only so much total information over my entire lifetime. (And only recently has my daily complexity limit risen to a decently high level. I have some pretty severe neurological issues.)
@sydneyfalk @gideonro That's actually a large part of the point. We (and the systems we deal with) each have /tremendously/ different capabilities. But we're operating around some capacity to run an awareness loop: OODA
Observe
Orient
Decide
Act
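A minimal sketch of that loop, entirely my own illustration (the stage functions below are made-up placeholders), just to show the shape of the cycle:

```python
# Illustrative OODA loop. Each stage is a placeholder; the point is
# that the cycle's net throughput is bounded by its slowest stage.

def observe(world):
    """Take in raw signals from the environment."""
    return world.get("signals", [])

def orient(signals, model):
    """Interpret signals against an internal world-model."""
    return {"signals": signals, "model": model}

def decide(picture):
    """Pick an action based on the oriented picture."""
    return "act" if picture["signals"] else "wait"

def act(decision, world):
    """Apply the decision back onto the environment."""
    world["last_action"] = decision
    return world

def ooda_cycle(world, model, steps=3):
    for _ in range(steps):
        signals = observe(world)
        picture = orient(signals, model)
        decision = decide(picture)
        world = act(decision, world)
    return world

world = ooda_cycle({"signals": ["input"]}, model={}, steps=3)
print(world["last_action"])  # prints "act"
```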
The net throughput, and capacity levels at each stage, vary tremendously, sometimes by individual, sometimes by situation. Economics is actually about finding a net balance within those levels, and ...
@gideonro @sydneyfalk ... where individual actors end up relative to one another based on this.
(That's not what an econ book will tell you, but it's what I'll tell you.)
Psychology is mostly about how our /internal/ processes operate, though with some group dynamics tossed in. Sociology is mostly about the group-level interactions. Ecology is how larger and more diverse groups of plants, animals, bacteria, fungi, viruses, etc., mesh OODA loops.
@gideonro @sydneyfalk Stephen Hawking is someone who is OK at observing (he can see and hear), very good at orienting (processing those observations) and deciding (coming up with some decision) ... and has virtually no ability to act of his own, even to the extent of being able to communicate. He can do so, but only /very/ slowly.
(I've witnessed this in person, it left a tremendous impression.)
Viruses cannot individually think, but as a population ...
@gideonro @sydneyfalk ... viruses run a massively parallelised Orient - Decide - Act process, by way of genetic mutation and processing. Given their very rudimentary individual capabilities, they can, again as a population, respond very quickly to new circumstances.
I've recently suggested that trees don't /have/ brains, but that forests /are/ brains. Trees share chemical signals between leaf and root systems.
@gideonro @sydneyfalk (I guess that makes them decision trees.) They solve ... tree-world problems. Climate, water, nutrients, pests. Maybe even woodsman and beaver and forest-fire problems. I suspect their timecycles are much longer than ours.
Insects may be more like trees than humans, in the sense that they share what might be exosomatic neurotransmitters by way of chemical signalling between individuals. If those chemicals directly set off...
@gideonro @sydneyfalk neural signalling, then it really might make more sense to think of an insect population as having a single mind, responding collectively to environmental responses, largely through chemical signalling processes. The neural wiring for that is almost certainly simpler than the visual and auditory systems we have. (And yes, there's some touch and some visual processing as well.) This is all wild spec on my part, mind, but I'm ...
@gideonro @sydneyfalk ... looking at what literature I can find about this. The chemical signalling of plants and insects is related to some work I'm vaguely familiar with -- "The Secret Lives of Trees" and some of E.O. Wilson's work with ants.
@gideonro @sydneyfalk Going back to a point that you made earlier, yes, changes in information processing are significant, and I suspect that they fundamentally change behavioural dynamics of /any/ system in which they occur. That includes, but extends well beyond humans.
I've been watching and studying the whole internet / fake news / propaganda thing develop, and am thinking that we might well be in the middle of another transformation right now.
@dredmorbius @sydneyfalk @gideonro I've had similar thoughts. The way a system works can be dramatically modified without any explicit structural changes by altering the strengths of various interactions.
@gideonro @sydneyfalk My premise being that /every/ major change in communications for humans has fundamentally changed our social, power, and economic structures: communications, speech / language, writing, maths, printing, literacy, mass communications, and more. There's strong evidence for virtually all of that from writing onward, and some for speech and rudimentary comms as well.
This also changes the question.
@gideonro @sydneyfalk It's not "how will we preserve our current social, political, and economic systems given changes in information technology and dynamics?", but "how will our social, political, and economic systems be transformed, given those changes?"
That's a significant inversion. And history shows that the changes are generally /not/ benign.
Mass comms brought fascism. Very close links there. The US only just avoided that, and maybe not even.
Maybe it's my neurological deficits or my (somewhat) unusual life, but I honestly don't consider "current things" to be preservable -- never did. Everything always changes, it's only a question of how it changes and how to adapt.
I honestly don't think mass communication "brought" fascism, I think it's a human tendency, a Confucian 'the world is wrong' taken far beyond reason and with no self-doubt involved.
I think mass comms made it EASIER, yes -- but they made good things just as easy. The truth to me is that only vigilance and balances can guard against the worst human tendencies and (optionally) encourage the best tendencies.
Anyway -- I'm reaching my limit on hypotheticals for the day, I think. ^_^ I realize some angles I take on this are different from yours, but I do find it fascinating food for thought, regardless. :) Thank you! :)
(sorry if that didn't seem clear earlier, or if Masto just didn't show you it -- but that's why I said 67)
(I mean, Ada and Babbage and Zuse all tried to find ways to create automation, but until it had a bridge to a physical application, they were thought experiments and "good ideas" -- so the accidental underpinnings of that being built for entirely alternative application meant all the seeds of compsci could finally explode into growth -- that's essentially how I've come to view the shift)
@sydneyfalk NB: I've got an "ontology of technological mechanisms" (at https://ello.co). One element I've identified (there are 9) is information: acquisition, parsing, storage, processing, and transmission. Infotech -- including printing presses and clay tablets -- is included. Also senses, artificial perception, language, maths, logic, algos, programming, AI. Which also gets to answering Gideon's question in a way.
@gideonro @sydneyfalk Described in part (updated) here: https://redd.it/5rnjg0
Original (7 mechanisms): https://ello.co/dredmorbius/post/klsjjjzzl9plqxz-ms8nww
I'd also posted some updated images ... somewhere ... recently.
(and I'll have to take a look at them sometime, it sounds like a fascinating ontology)
@sydneyfalk @gideonro I'm trying to draw the bridge between /automation/ -- Jacquard looms and cam-driven operations, and /algorithmic/ operation. It's a significant leap, and seems to require a few steps.
Boole and formal logic (I think that's Wittgenstein, possibly Russell and/or Whitehead) are also involved. Holes in my learning.
(In my concept, I suppose I see algorithmic operation as a natural next step from automation. Humans learned long before how to 'organize' and 'direct' groups of humans, and applied similar principles (with very new constraints) to 'organize' and 'direct' machines.
Algorithms as seed concept were fundamentally in place (or else humanity could not have thrived and grown), but had no way to expand and develop.)
@sydneyfalk @gideonro Right. There's also the role of cybernetics, and either prescribed paths (e.g., automatic tool-cutting or processing equipment), or some form of sensing and feedback systems.
Being able to sense and understand (world-model) the environment tremendously increases the capacity to move within it.
The role of timekeeping (and pole-finding) in navigation is interesting. A number of shipwrecks involved.
@gideonro Where I see a problem (and this is getting to the questions of AI which you raised) is in some of the co-structures and scaffolding which have emerged around both science and the institutions of science. The evolution of scientific disciplines from natural (physics, chemistry, biology, astronomy) and moral (sociology, psychology, economics, political science) philosophy in particular. Those distinctions made some degree of sense in the mid-19th century.
10/
@gideonro They also created a set of disciplines, and sub-disciplines, in which the fundamental task of /fact aggregation/ has been raised to a high art.
Blasphemy: Facts are not science.
Or more specific blasphemy: /unstructured/ facts are not science.
What science hangs from is some central organising principle. The sciences which /are/ well-formed have established their core principle. The ones which aren't haven't. Often for rejecting it.
11/
@gideonro I'll posit that the following sciences are reasonably well-formed: Physics, chemistry, biology, and geology.
But that these are not: Ecology, sociology, psychology, economics, political science.
I'm not saying that the second list are /invalid/ sciences, they're not. But they're poorly formed in that they have no well-articulated central organising principles. The first list lacked this for a long time as well though.
Geology is an interesting case.
12/
@gideonro Geology has existed in one form or another since the Egyptians and Chinese were sorting out how to build water-control systems and understand landforms. It took off particularly with canal-building and coal-mining around the 18th century, though there were earlier applications, substantively from mining. (See Agricola's "De Re Metallica", /the/ standard mining text until the 20th century, despite not being translated from Latin until Herbert and Lou Hoover's 1912 edition.)
13/
@gideonro The basic problems with geology were:
1. The Bible.
2. The geological record, as represented by geological /structure/.
3. "Geological Time".
4. A lack of any known plausible mechanism for unsticking the theoretical model.
The unsticking started in 1896, when Becquerel and the Curies correctly explained radioactivity; subsequent work on radioactive decay and half-lives provided a Very Long Clock that broke Lord Kelvin's ~30-million-year thermodynamic limit.
14/
@gideonro That is, there was a presumption (at least among some, and so far as I can tell, many) geologists that the Biblical and Geological records had to comport -- the idea that one of these might be entirely false (at least on geological matters) was ... not within the realm of publicly admissible possibility.
And understanding of kinetics and measurements of subterranean heat didn't allow for an Earth more than 30 million years old, absent some new source of heat.
15/
@gideonro And this model conflict existed /despite/ geology being very useful in other regards. Geological strata were well understood, folding and faulting provided explanations for deviations from it (though /how/ folding and faulting occurred was difficult), and more.
But with radioactivity, we suddenly had direct, /measurable/ evidence of rocks that were far older than previously believed: a half-billion years old, by the first decade of the 20th century.
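As a rough sketch of how half-lives make a Very Long Clock -- my own illustrative example; the only real constant below is the U-238 half-life (~4.47 billion years), and the assumptions (no daughter isotope at formation, closed system) are noted in the code:

```python
import math

def age_from_ratio(daughter_to_parent, half_life_years):
    """Radiometric age: t = T_half * log2(1 + D/P).
    Assumes no daughter isotope at formation and a closed system."""
    return half_life_years * math.log2(1.0 + daughter_to_parent)

# U-238 decays (ultimately) to Pb-206; half-life ~4.47 billion years.
# Even a modest daughter-to-parent ratio implies an enormous age:
age = age_from_ratio(0.08, 4.47e9)
print(round(age / 1e6))  # 496 -- about half a billion years
```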
16/
@gideonro Boltwood and Rutherford's experiments showed this.
At roughly the same period, Alfred Wegener was first proposing his "continental drift" theory, based on observable geography and similarities of strata in different parts of the world, most especially on either side of the Atlantic Ocean. But Wegener still lacked either mechanism or time.
Over the next sixty years, we eventually got both, via multiple paths.
https://en.m.wikipedia.org/wiki/Age_of_the_Earth
17/
@gideonro Further samplings of radioactive rocks, including samples from meteorites and ultimately the Moon, pushed the age of the Earth back to 4.5 billion years, +/- 1%, by the mid-1950s. Seismology (and nuclear testing) provided hints of Earth's inner structure: a thin crust over a waxy molten mantle, and an inner, solid, and radioactive core, still many thousands of degrees after billions of years. And convection currents driving continental plates over hundreds of millions of years.
18/
@gideonro Wegener was proved right, and the geological community finally declared, in 1965, with significant publications in 1967 (I said I'd get there), that the theory of plate tectonics was central to explaining all geological phenomena.
That is, it wasn't until 1967 that geology finally had its central, core, organising principle: plate tectonics. All else hangs from that.
The previous 200+ years of the field as a discipline, were all prelude.
19/
@gideonro The fundamental organising principle of biology: self-reproduction with evolutionary refinements in fitness transmitted via genetic material.
Parts of that concept were determined by Charles Darwin, in his Theory of Evolution, but the specific nature of the genetic component wasn't explained until Watson, Crick, and Franklin's work, published in 1953. Ramifications of it are /still/ being established. (And a popular account of which was published in 1968.)
20/
@gideonro The central organising principle of chemistry is the properties of valence electrons, first revealed through Mendeleev's periodic table, in 1869.
Again, subsequent refinement of the theory, understanding of electron orbital structures and shells, bond geometry, van der Waals forces, atomic mass, nuclear structure, isotopes, occurred with time. But chemistry at its heart: Interactions of outermost electrons.
21/
@gideonro Physics, put simply: force propagation on matter and energy through time and space, recursed on time and space itself.
Four forces, several underlying constants, statistical probability emerging as thermodynamics, and quantum uncertainty fill out that spec, but all hang off the central core.
22/
@gideonro But if we look at ecology, psychology, sociology, economics, and/or political science, we don't find that core. Or if one is stated, it holds up poorly to inspection, at least to my experience.
Ecology, psychology, and sociology are best described as /areas of defined inquiry/ but /without a core underlying principle/. This is based on more- or less-rigorous explorations of the spaces, but it's been my general observation. Not that such a core doesn't exist.
23/
@gideonro But at the very least, they've not been generally accepted into the core of the studies yet, nor pitched as such in general or introductory texts.
(An introductory text is precisely where such a core organising principle should be expressed.)
In the case of economics, there is a postulated core, but it is false. I say this on the basis of having completed a major course of study in the topic, and much subsequent inquiry. The fact is little disputed.
24/
@gideonro But the field's orthodoxy, and curriculum, fail to admit the error or settle on the correct core model.
Political science is again /a defined area of inquiry/ rather than having a coherent conceptual core. Wikipedia's introduction:
"Political science is a social science which deals with systems of governments, and the analysis of political activities, political thoughts and political behaviour."
https://en.m.wikipedia.org/wiki/Political_science
25/
@gideonro As I've been saying in my side conversation with @sydneyfalk, there's an element of the hygiene factors, if you will, which has facilitated and augmented cognitive capacity: our tools for information management. This began with senses, communication, language & speech, writing, maths, logic, algorithmic processing, programming. And now AI.
One definition of AI is that it's a research-funding category. That's likely a big truth.
26/
@dredmorbius I'm going to have to jump back into this later. Family coming over. Great stuff, however, and I'm eager to read your additional notes...
@gideonro Another construction comes from David Krakauer's observation that "intelligence is search".
If /knowledge/ is /search/ with /explanation/, then Artificial Intelligence is Search /without/ Explanation.
That is, it is a domain of non-explanatory, but useful, knowledge.
As such, I don't think AI is fundamentally helpful in getting a handle on knowledge proliferation. If anything, the opposite.
http://m.nautil.us/issue/23/dominoes/ingenious-david-krakauer
27/
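A toy version of that distinction, entirely my own illustration: a black-box search can return a good answer while yielding no account of /why/ it is good:

```python
import random

def black_box(x):
    """Pretend we cannot see inside this function."""
    return (x - 3.0) ** 2 + 1.0

def random_search(f, lo, hi, trials=10_000, seed=0):
    """Blindly sample the space, keep the best answer seen.
    Returns an answer, but no structure explaining it."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(trials):
        x = rng.uniform(lo, hi)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

x, y = random_search(black_box, -10, 10)
print(x, y)  # values close to 3.0 and 1.0
```

The search "finds" the minimum, but carries no model you could use to explain or generalise the result -- non-explanatory, but useful, knowledge.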
@gideonro AI will make fact-discovery all the easier, /without/ providing an underlying explanatory framework. Even where we ask AI to find an explanatory framework, it cannot tell us /how it got there/. Which is to say, the /structural/ framework humans rely on for understanding, will be weakened.
One view is that this leads to an increasing Tower of Babel scenario, with ever more facts but less understanding.
Another option is that knowledge transcends humans.
28/
@gideonro I'm not sure either of those is particularly appealing.
There are several potential third paths. The one that has a growing appeal to me is to recognise as a new fundamental core of understanding the watershed which was broached in 1948: Claude Shannon's information theory, in combination with network and systems theory.
I'm just realising as I write this a parallel between information, networks, and systems, and the mediaeval curriculum of the quadrivium.
29/
@gideonro The mediaeval curriculum, the Seven Liberal Arts, comprised the trivium (grammar, logic, rhetoric: input, processing, and output), and the quadrivium (maths, geometry, music, and astronomy: numbers, numbers in space, numbers in time, and numbers in time and space).
Networks are complexity in space. Communications is complexity in time. Systems are complexity in time and space. Maybe. This just occurred to me, and it may just be an appealing parallel.
30/
@gideonro The common resolution of the social sciences, and ecology, is that each is an information-processing system operating within an open, self-organising system: with costs, rewards, uncertainties (risks), expensive information processing, and different balances of capabilities among the elements within the system. That is the common core to each of these, with different leaf-node properties among them. Not /insignificant/ differences. But peripheral.
31/
@gideonro Which is to say: if you want to address the complexity overload, look to complexity itself, and the structure of knowledge. Just as earlier false (though temporarily useful) structures previously were used, what science needs to do is focus on the common informational core of a multitude of disciplines.
32/
@gideonro Better tools for managing complexity /only results in more complexity/. That's the false light of AI. Subverting the complexity entirely is the path to understanding.
33/end/
@gideonro I'd argue that /scientific knowledge/ increases the predictability of the universe around us. It is different from /technology/ which is fundamentally concerned with /procedure/ or /mechanism/ toward some artifactual or effective /end/. (J.S. Mill has a nice discussion of this distinction ... somewhere.)
I'm less certain on "how many people must understand", but it should be enough /that practical advantage can be made/, and /knowledge sustained/.
2/