[#!: full-line executeEmbeds, phraseValueList = (("fout fout)("backtrack backtrack))
[#!: path_insertIn_fHand (link d_webWork 'fin Head_one.html') fout ;
Wilhelm Abel, David Fischer - The Great Pricing Waves 1200-1990 AD
[#!: path_insertIn_fHand (link d_webWork 'fin Head_two.html') fout ;
[#!: path_executeEmbedsInsertIn_fHand (link d_webWork 'Menu.html') phraseValueList ;
[#!: path_executeEmbedsInsertIn_fHand (link d_webWork 'Menu projects.html') phraseValueList ;
[#!: path_insertIn_fHand (link d_webWork 'normalStatus.html' ) fout ;
Wilhelm Abel, David Fischer - The Great Pricing Waves 1200-1990 AD
"... History never repeats itself, but it rhymes. ..." (?John Maynard Keynes?)
Table of Contents
NOTE : The model here is DEFINITELY NOT suitable for application to [trade, invest]ing! It's far too [early, incomplete, immature, erroneous]! See Stephen Puetz's http://www.uct-news.com/ for the investment side of the UWS.
Model results
"SP500 & DJIA indexes back to 1872"
Linear semi-log segments [1872-1926, 1926-2020]
Fibonacci levels versus log-levels
Larger Fibonacci's at greater timescales
How do I get negative numbers from a log transformation of the data?
What's [wrong with, missing from] the current model?
- The data sets :
- It is often a [huge, perennial, misleading] mistake to represent real data as smooth curves. [Averaging, filtering] ruins data to some extent. So I keep doing it anyway... the sunspot cycle is a perfect example
- So far, I have only used one source (David Fischer), but there is a huge amount of global information on this subject of price history. As Fischer and others have stated, most historical data has been pricing!
- I had really wanted to extend back to ancient primary civilisations [Sumer-Babylon-Akkadian, Harappan, Egyptian], secondary [China, ...], tertiary [Persian, Greece, Roman, Olmec, etc]. Oh well.
- Data [quality, resolution] is variable over time
- A metric for price over time is always problematic
- Data processing probably involved [averaging, smoothing, filtering, regional biases], and I have not looked into the sources
- There are huge transitions in the [nature, driving forces] of economy. First of all, technology, then globalisation
- I have selected only a subset of the [dimensions, regions] of the datasets (pricing mostly of agricultural goods, not [wages, rents, land prices, etc])
- The modern context has changed so much, it seems to me that the price [revolution, equilibrium] context is not so useful, even though it's good to keep in mind. Eventually, we may hit the wall, but with personal choices radically changing families, it is unclear if the dominant population driver will play the same role, even though it still seems to be a major variable?
- [photos, digitization] of graphs from book
- Cellphone photos were distorted by curvature of pages of the open book
- "peak shaving" (high & low) occurred with the digitization process, and I did NOT manually correct this! (it is simple to do)
- Simple linear regression of pricing periods
- The time-boundaries for pricing [revolutions, equilibria] are a fuzzy concept to begin with, and a bit artificial. I did NOT try to [improve, optimize] this, as it's a red herring at this very early stage of this project. One fun case is my use of 1940 as a time-boundary instead of 1926. Food for thought, even though it looks like a violation.
- Several time periods have [clear, strong] curvature that is not accommodated by linear regression. At this time, I have zero interest in playing with non-linear [transitions, periods], as there are far more important issues at hand. I would far rather put time into fractional order calculus and spiking neural networks.
- The discontinuities across time-boundaries are very clear with the plot of grain prices, as I used the time-boundaries for consumables prices. It would be easy to adapt time-boundaries for each dataset, but that also opens up issues.
- Multi-fractal (Benoit Mandelbrot) and Fractional Order Calculus (Gottfried Leibniz) approaches
- Oh well, no time for this now...
- non-linear frequency splitters etc etc
Are [political, economic, philosophical, psychological, sociological] theories unimportant?
The easy answer is "YES", as these seem to pre-occupy everybody, and everybody seems to have their own ideas (which are almost invariably other people's ideas, which are almost invariably of origins in antiquity) to impose on others.
But a justifiable interpretation of the price revolutions, and the analysis of BOTH [Vilfredo Pareto, Benoit Mandelbrot] that wealth distributions are the same across societies (slopes can differ), is that our "brilliant ideas" count for far less than our egos tell us. This seems to be a particular problem with intellectuals, perhaps because of their very [nature, privileged positions].
While ideas seem to be important, it's easy to forget that they also can cause massive problems, and there is little indication that we can think this through reliably.
I almost suspect that, with progress, machines may derive models and concepts better than humans, based on data rather than on the "conclusions-driven bias" of [fragile, incomplete] human intellect? In many cases, they could hardly do worse.
And for the future?
Picking up a point from the section below "Linear regression of price [revolution, equilibrium] segments" :
We don't know [how, when] the "modern price revolution" will end, so I've merely used the last data-point. It's probably going to be bad, but as Fischer states, the impacts on people of crashes from price revolutions have become FAR less traumatic over the last eight hundred years. I suspect that this is due almost entirely to technology and changes from local-to-global trade. These have generated increasing "better than the pharaohs" wealth, and far greater stability.
- Perhaps our ego-centric assumed [business, social, political, rural-to-urban] brilliance is merely [driven, afforded] by those factors?
- Will the "modern price revolution" continue as long as technology continues to advance in a manner that generates greater wealth or is there some kind of [resource, environmental, over-population] limit to that?
- Will under-population become a main issue soon, as the very nature of families change with the [desire, aspiration]s of the population?
- Will "hybrid" [human, machine] living, plus increasing levels of the "economically competitive cognitive level" also [drive, force] changes in human nature?
- It seems that the largest modern [mythology, religion, science] is socialism, and creating a "better" human is at the origin of much socialist thinking (including eugenics).
???
Was 2007-2009 the modern equivalent to the 1929 crash?
Was 2007-2009 the modern equivalent to the 1929 crash, less the socialist abuse of the free market?
??MacKay?? reference
??MacKay??'s comments are certainly unconventional, but merit closer attention.
Key points :
How can the Great Pricing Waves be correlated with a 'dead' system?
Inspirations
Inspired by many historians and financial market analysts, most notably but not limited to (in order of appearance on the world stage) :
Ibn Khaldun "The Muqaddimah"
?Alexander Chizhevsky? - ~10 year cycles & economic activities
?Nikolai Kondratieff? - ~50 y (Mayan) super-cycles
Ralph Nelson Elliott - Elliott waves
Arnold J. Toynbee "A Study of History"
- (the full 10-12 volumes; I was missing one or two, and now I have only the abridged two-volume set)
Ivanka Charvátová - best concept I've seen of solar activity over 5-10 ky
Steven Yaskell and Willie Soon "Grand phases on the Sun" - wonderful descriptions of the Maunder grand solar minimum and many scientists and others
Harry Dent - the most successful 10-40 year [economic, financial] forecaster that I am aware of
Robert Prechter's Socionomics, the first quantitative Sociology
David Fischer's "The Great Wave" - I forget which author (probably Stephen Puetz, who used some of the data) mentioned Fischer's book, but it is a real gem, built on the meticulous work of numerous historians. I consider it to be one of the great historical works.
Stephen Puetz, and especially co-authors [Glen Borchardt, Andreas Prokoph, ??],
- and the system of 20+ Mayan calendars
Ray Dalio of the Bridgewater Associates hedge fund - nice historical analysis from the point of view of a financial industry leader
References
David Fischer does a superb job of listing and discussing a huge number of references on historical pricing and the issues that arise!!!
I need to somehow put a lot of that here! (huge work, as Optical Character Recognition (OCR) is often problematic for me, even though it was pretty good at work)
Data, Methodology, Applications, Software
"$d_Fischer" = "http://www.BillHowell.ca/economics, markets/Fischer, David 1996 The Great Wave/"
for the internet; on your own system, of course, it is whatever directory path you downloaded the files of this webPage to, if you do download them.
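As a hedged sketch of the local setup (the directory below is a made-up example - point it at wherever you actually downloaded the files) :

```shell
# Hypothetical local setup - the path below is an example only;
# point d_Fischer at wherever you downloaded this webPage's files.
d_Fischer="$HOME/web/economics, markets/Fischer, David 1996 The Great Wave/"

# The path contains spaces and a comma, so always double-quote expansions :
ls "$d_Fischer" 2>/dev/null || echo "directory not found - edit d_Fischer"
```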
Data sources
Primary :
David Fischer 1996 "The Great Wave, Price revolutions and the rhythm of history" New York, Oxford University Press, ISBN 0-19-505377-X
I have NOT [purchased, read] :
Wilhelm Abel 1935 "Agrarkrisen und Agrarkonjunktur: Eine Geschichte der Land- und Ernährungswirtschaft Mitteleuropas seit dem hohen Mittelalter" Hamburg & Berlin 1966 appendix https://de.wikipedia.org/wiki/Wilhelm_Abel
I have NOT yet used :
https://historicalstatistics.org/
many links to other data sources
https://thehistoryofengland.co.uk/resource/medieval-prices-and-wages/
Modern themes that illustrate this best :
- Stephen Puetz - Universal Wave Series (kind of like the system of 20+ Mayan calendars)
- Ralph Nelson Elliott
- Robert Prechter
Methodology
Photograph paper graphs :
Collect digital images of graphs in the book
- cellphone photos - using the drill press in my tool shop to [raise, lower] the cellphone
- clamps - to reduce curving of book pages when photographing
- heavy-duty lamps for lighting - unfortunately not my "natural light" fluorescents, so pages are yellow-tan
- USB transfer of photos from cellphone to computer
- photo-files renamed using Figure [numbers, titles] adapted from the book
GNU Image Manipulation Program (gimp) - Image preparation for each curve of each graph :
GNU Image Manipulation Program (gimp) isn't the easiest, most intuitive software, but it's the kind of initiation-by-fire that later leads to you being happy that you didn't stick with lesser [image, video] software. I cannot provide a users' guide here, but I will make [random, scattered] comments on the process. Because gimp [wording, processes, menus] can be a bit "strange", it may help to have this as a guide (or not). Of course, you may be using entirely different image processing software, in which case my instructions are useless for YOU. But these instructions are actually for me, as there is no way I will remember [software, tools, techniques, how-to] even six months later unless I use applications often (not usually the case).
gimp - [rotate, crop] graphs from each cell-photo
- From [nemo, dolphin, other] file manager, open one of the cellphone photos in gimp.
- gimp menu -> View -> Zoom -> Fit image in window
- This allows you to see the full image, so you can easily select the graph in the next step.
- gimp menu -> Tools -> Selection tools -> rectangle select
- left-click-hold-down at one of the "outer corners" of the graph, drag the mouse to the opposite corner
- release the mouse button, then keyboard "Control-C" to copy the rectangular-selected graph
- gimp menu -> File -> Create -> From clipboard
- a new image window will pop up
- if the new image is NOT good, just close the window without saving, and go back to select the region again
- gimp menu -> File -> Save
- if the new image is good, save the window to the filepath of your choice (normally your working directory for this project). IMPORTANT : the save defaults to gimp format (.xcf), and leave it that way so that you can use gimp to modify the image. Once finished - you can export to [jpg, png, other] formats.
- Make copies of the cropped graph image for each curve that you will extract (assuming that the curves [cross over, include [dotted, dashed, otherwise] discontinuous curves, approach closely]).
gimp - remove [adjacent, cross-over] curves for each curve-file :
This step ensures that in the next section digitizelt, the graph curve digitizer, will capture most, if not all, of the curve automatically in one attempt, which can save you a lot of time and frustration.
- From your file manager, open the curve-graph-file in gimp
- gimp menu -> View -> Zoom : Zoom in on a section of the curve at a scale that is easy to work with.
- Select the color of the curve:
- gimp menu -> Tools -> Colour picker
- Place color picker over a segment of the curve, left-click. This selects the current color of the curve. You can, of course, select a different color, but you must be sure to re-color the entire curve!
- My advice - ALWAYS write down the RGB values of the colors!! This is important for possible color operations later. Furthermore, gimp sometimes uses 0-255, and sometimes 0.0-1.0 (easy to convert as simple proportions). Oops - now I forget how I found the RGB values (later, some day, I will add it here...). Hint?? :
- gimp menu -> Windows -> Dockable Windows -> Tool Options
background color : | Scanned curve (0-1 | 0-255) | Price equilibria (0-1 | 0-255) |
Red | 0.808 | 206 | 0.752 | 192 |
Green | 0.627 | 160 | 0.922 | 235 |
Blue | 0.486 | 124 | 0.215 | 55 |
- gimp menu -> Tools -> Selection tools -> Free select (lasso icon?)
- I usually prefer this with the eraser, rather than the eraser alone, for removing other [curves, text] from around the curve of interest. The eraser alone jumps around too much, and is too crude to be efficient in most cases, but sometimes it is the best tool to use.
- Left-Click at a "clean end" of a segment of the curve of interest, but just at the side of the curve so as not to erase it.
- Release the mouse button and go to the other "clean end" of the curve of interest (or way off it to clear an area), and click again.
- Staying on the same side of the curve, click a perimeter that is sufficient to clear that [part, crossover] of the curve.
- Finish the perimeter clicking at the initial starting point.
- Erase everything within the perimeter by fat-pencilling over it with the right color.
- gimp menu -> Tools -> Colour picker
- click on a clean area of your graph that has the same background color that surrounds your curve.
- gimp menu -> Tools -> Paint tools -> Pencil
- I set up my mouse wheel so that "Alt-key + mouse wheel up-down" [increases, decreases] the pencil's width.
- gimp menu -> Edit -> Preferences -> Input devices -> Input controllers :
- "Main Mouse wheel" should be in the "Active controllers" right-hand side. If not select "Mouse wheel" in the "Available controllers" list and move (right arrrow in between the lists) to "Active".
- double-left-click on "Main mouse wheel" brings up "Configure Input Controller " window
- General : shows "Main mouse wheel"
- Select "Scroll up (Alt)", click "Edit" button : I set this to "tools-valu2-increase"
- Select "Scroll down (Alt)", click "Edit" button : I set this to "tools-valu2-decrease"
- set a fat width to erase a wide swath.
- I use a big eraser, but I'm probably doing this wrong. I just want to bucket-fill with a color of choice, but I've [forgotten, lost the instructions for] what to do, as it doesn't work simply.
- The eraser's action is limited to your free select area, which is exactly what you want, as a mistaken motion won't erase your curve or other important areas of the image.
- Repeat "Erase everything within the perimeter" all along the curve of interest
- This must "free" the selected curve from all potential interference.
gimp : curve, make [continuous, mono-color]
- As in the previous section, select the color of the curve:
- gimp menu -> Tools -> Colour picker
- Place color picker over a segment of the curve, left-click. This selects the current color of the curve. You can, of course, select a different color, but you must be sure to re-color the entire curve!
- Mark the path of the curve (at least the parts that need to be filled in with the right color) :
- gimp menu -> Tools -> Paths
- left-click at one end of the curve, and click at each "turning-point"
- gimp's Bezier curves allow you to follow curved sections (see instructions)
- once you have accurately set the path :
- gimp menu -> Edit -> Stroke selection ->
- set [stroke line, solid color, anti-aliasing, linewidth >=3 to make it easy for digitizelt to follow the curve]
- left-click "Stroke" button
- Save the file in native gimp .xcf format (important for changes later to avoid re-doing work)
- Export the file in [jpg, png, whatever] format :
- gimp menu -> File -> Export As ->
- Bottom right corner "All export images" drop-down list : select "jpg"
- set the proper [directory, filename], and change the file extension to match the format that you want!
- Click "Export"
- Adjust quality of output image if desired
- Click save
Digitize graph curves using https://www.digitizeit.xyz/ software
- I selected this software from a huge selection, almost randomly
- minimum requirements - [try it before you pay, modest cost (~50 $US), command-line use]
- (30Jan2021 I haven't checked the extent to which this can be run directly from scripts, as I am still learning the basics of the software.)
- quick trials - [simple, fast to get going, effective]
- I am very happy with it
- In the recent past (my Puetz mini-project), I think I used https://automeris.io/WebPlotDigitizer/
- I couldn't [remember, find] it even though I may have paid for it. This is typical for me, losing [track, memories] only a month or two after using it. I found it soon after buying digitizelt, and stuck with digitizelt.
- Digitizelt instructions are simple, so I won't comment.
- Key image preparations!! :
- Make separate copies of each graph for each curve that you will extract IF the curves cross over or come into close proximity.
- NOTE : I have not looked into scripting of digitizelt.
- To launch :
- $ java -jar "$d_SysMaint""digitizelt graph from image/DigitizeIt.jar"
- Note : "$d_SysMaint""digitizelt graph from image/" is the directory where I save the digitizelt system.
- Digitize a curve :
- Digitizelt Menu -> Axes -> [X, Y] axes [minimum, maximum] - simply click on the right spots on the graph axes
- Digitizelt Menu -> Axes -> [X=linear, Y=logarithmic]
- Digitizelt Menu -> Auto -> Options -> Automatic digitizing window
- Max. RGB difference within the same object (marker or line): I set it to 40-80% (possibly the lower end with good-quality curves?)
- Symbol matching in % : I haven't used symbols yet, so leave it be
- Max. number of digitized points on line : I leave at 10000
- Digitizelt Menu -> Auto -> Find Line
- If the curve is of good quality and easy for digitizelt to follow, it's possible that the entire curve is automatically digitized.
- Sometimes you may have to save data from each segment of the curve as it is digitized. If you export the data for each segment to a file, then if something goes wrong you won't lose the segments that you have already done :
- Digitizelt Menu -> File -> Export ASCII -> [directory, filename]
- Once all segments have been saved to file, combine and sort the data, for example :
- $ d_gimped="$d_PROJECTS""References/Fischer, David 1996 The Great Wave/gimped/"
- $ echo "year,Fig 0.01 consumables" >"$d_gimped""Fig 0.01 Price of consumables in England 1201-1993.dat"
- $ cat "$d_gimped""Fig 0.01.dat" "$d_gimped""Fig 0.01 2.dat" "$d_gimped""Fig 0.01 3.dat" "$d_gimped""Fig 0.01 4.dat" "$d_gimped""Fig 0.01 5.dat" | sort >>"$d_gimped""Fig 0.01 Price of consumables in England 1201-1993.dat"
- This data is now ready (comma-delimited) for gnuplot.
- Digitizelt Menu -> File -> Export ASCII -> [directory, filename] - export the data to a file, if this has not already been done.
- sort the data, especially as curve segments will disorder the data (not essential, but easier to read). Use an invert sort when dates are BC (negative years)! (use the spreadsheet to sort if data crossed [BC, AD])
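Rather than an invert text sort, a general-numeric sort on the year field also orders BC (negative) years correctly; a minimal sketch (the tiny dataset is made up for illustration, using year values from the Babylon timePoints later on this page) :

```shell
# BC years are negative, so a plain text sort mis-orders them; sorting the
# first comma-delimited field as a general number puts BC -> AD in order.
printf '%s\n' '-1684.230,12.1' '-1796.061,10.5' '-1739.991,11.3' > /tmp/bc_test.dat
sort -t, -k1,1g /tmp/bc_test.dat
# the earliest year, -1796.061, now sorts first
```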
IMPORTANT issues with digitization of curves
Digitization of curves is not without its issues, and this is NOT limited to https://www.digitizeit.xyz/ software, although different software may be [more, less] robust with respect to these problems. Digitizelt probably has settings that can be adjusted for particular datasets, but I have not looked for that in my first use of the software (later, maybe much later...). In a nutshell, the digitization must be [robust, accurate] on the whole, and compromises are required in the software. "Peak shaving" (high and low) occurs, and high frequencies tend to be smoothed out; these can be critical to some applications.
- The axes are typically at angles, with curvature from the cellphone photo. This causes minor distortions in the data.
- It is a very simple process to go back to the dataset and manually correct for "peak shaving", as only a small fraction of the data is affected (usually). But I haven't done this.
- Retention of high frequencies can be much more manual work, depending on the curve.
- I end up with a ridiculous number of "data points", as each line segment in the original data can have hundreds of "false data" points! These can be removed, with work, and this will ultimately be necessary if the spreadsheet grows too large (it's already inconvenient). "Jitter" of digitized points on a straight line is observable.
- Fischer's Appendix H "Nominal prices and silver equivalents" provides a very useful discussion of [how, why] historical studies sometimes correct for monetary fluctuations, especially currency debasements. Data is sometimes reported on a "nominal" basis, and there is likely an issue with the [stability, inflation] of the metric (eg dollar in modern times, grains of silver or gold as mining [reserves, costs, availability] change).
- Never forget that the preparation of the data by the original authors also will involve simplifications, and also involves data [filtering, smoothing] techniques. Regional variances within a country are also important, as pointed out by Fischer. I have NOT checked the sources for these issues.
I have NOT done this simple work! Yes- laziness creeps in, but it's just an initial stage of the project, and this can be done later (yeah right, but will I do it?)
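Thinning the "false data" points could at least start with simple decimation; a minimal sketch (the synthetic file below stands in for a digitized curve file, and keeping every 10th point is an arbitrary choice) :

```shell
# Build a stand-in for an over-dense digitized curve : 793 comma-delimited
# (year, value) points, then keep only every 10th point (plus the first).
seq 1201 1993 | awk '{ printf "%d,%.2f\n", $1, $1 * 0.01 }' > /tmp/curve.dat
awk -F, 'NR % 10 == 1' /tmp/curve.dat > /tmp/curve_thin.dat
wc -l < /tmp/curve_thin.dat
```

A real cleanup would instead test successive points for near-collinearity, but even crude decimation tames the spreadsheet size.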
Warehousing graphical data
"$d_webRawe""economics, markets/Fischer, David 1996 The Great Wave/1_Fischer 1200-2020.ods" :
- add sheet for each graph-curve
I typically use a LibreOffice Calc spreadsheet to [collect, rearrange, simply transform] data. For this project : 1_Fischer 1200-2020.ods
For gnuplot, it is most convenient (albeit not necessary) to use comma-delimited (csv-like) text data files. Just save the spreadsheet in that format; now you have a data warehouse with the [convenience, power] of spreadsheets, and simple output usable by gnuplot.
Set timePoints for transitions between price [revolutions, equilibria]
The [first, key] step is to select the timePoints for transitions between price [revolutions, equilibria]. The endPoints of price revolutions were taken at the point where prices stabilised following a price revolution "crash" and tended to drift lower slowly. Price equilibria endPoints were taken to be where significant price increases first occurred, and the subsequent price revolution begins, somewhat slowly.
These timePoints are somewhat "cloudy", and while I kept "close" to dates as usually specified in Fischer's book, I didn't take these as exact. Perhaps the actual price data are a better reference. I used the "consumables prices" as a basis for other curves (eg grains) for the medieval to modern periods, but it's better to treat each curve separately.
This is susceptible to serious bias in selecting the [start, end] dates for each segment. See the spreadsheet 1_Fischer 1200-2020.ods.
For example, I initially selected the following end-points for England's consumable prices :
year 1206.58 1373.55 1501.78 1661.06 1754.20 1803.78 1937.77 1999.74
The year ~1940 was taken as the [start, end] point for my 1872-2020 S&P 500 stock index [start, end] point, so I use it here as well.
We don't know [how, when] the "modern price revolution" will end, so I've merely used the last data-point.
Do single-linear-regressions of price [revolution, equilibrium] segments
This is easy with the spreadsheet - one column of regression results. I use 10 year intervals per segment, but you only really need the [start, end] dates [-,+] 20 years. The extra 20 years extends the segments at both ends for visual clarity. For an example, see the spreadsheet 1_Fischer 1200-2020.ods, sheet "Fig 0.01 SLRsegments".
Save the "SLRsegments" to a data file that can be used by gnuplot. Example : Fig 0.01 line segments for GNUplots.dat. Notice that column titles can use free-format text, except for the comma, which separates columns.
It would greatly reduce annoyance if this entire step was automated.
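As a sketch of what such automation could look like, here is a single-segment least-squares fit in awk. The file name and the two-column (x, y) format are assumptions for illustration; this mimics, rather than reproduces, the QNial fit_linearRegress.ndf step :

```shell
# Ordinary least squares y = a + b*x over comma-delimited (x, y) rows.
printf '%s\n' '0,1' '1,3' '2,5' '3,7' > /tmp/seg.dat
awk -F, '{ n++; sx += $1; sy += $2; sxx += $1 * $1; sxy += $1 * $2 }
    END {
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
        a = (sy - b * sx) / n                           # intercept
        printf "intercept=%g slope=%g\n", a, b
    }' /tmp/seg.dat
# the exact line y = 1 + 2x gives : intercept=1 slope=2
```

Looping this over one file per segment would replace the copy-paste steps below.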
I used my simple Qnial programs to do the linear regression :
- Save data of 1_Fischer 1200-2020.ods to data file , example Fig 0.01 linear regression raw data.dat
- For QNial, the data in rows must be separated by tabs, and the column titles must : be separated with a quote ("), and not include [spaces, punctuation, strange characters] (use underline (_) as a substitute).
- For each curve, Fischer linear regressions.ndf - a special operator (procedure) is created to select a segment's dataFile, handle [data, results] file [input, output], and call fit_linearRegress.ndf
- Copy the [constant,slope] for each segment to "1_Fischer 1200-2020.ods", sheet example "Fig 0.01 SLRsegments".
- "Fischer linear regressions.ndf" saves the estimated ression results, Yest, results to a text file, which is then copy-pasted to "1_Fischer 1200-2020.ods"
- In turn, "1_Fischer 1200-2020.ods" data is once again exported in csv format (including the Yest results) that GNUplot can use (I could also use tab or other delimited, not just commas).
- Note that the use of the spreadsheet is not essential and does require "double-handling" of the results, but it does provide a nice "data warehouse", amking it far easier to follow results especially years after doing this work (or if someone else wants to use it).
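Part of the reformatting for QNial (underscoring the column titles, converting commas to tabs) can be done mechanically; a sketch with sed and tr, assuming a csv whose first line is the title row (the quote-separated-title detail is left aside) :

```shell
# Title row : replace [space, punctuation] with underscore; all rows :
# commas become tabs. The sample header matches the Fig 0.01 example above.
echo 'year,Fig 0.01 consumables' |
    sed '1 s/[ .()-]/_/g' |
    tr ',' '\t'
```

On a full file, the `1 s/...` address makes sed touch only the header line, while tr converts every row.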
GNUplot - create graph of nominal data
30Jan2021 - I'm too lazy to provide details at this time, but the good thing is that I used gnuplot scripting, so this part is perfectly well described by the scripts (in this web-directory) and the datasets.
For this project I did not write bash scripts that create the gnuplot scripts, but I've done that in the past. It's a life-saver for repetitive work.
[calculate, plot] detrended results for each time segment
During the step "Do singular linear regressions for each time segment" described above, the QNial program generates Yest values for each digitized point. These values are copy-pasted to the "data warehouse" spreadsheet, where a simple formula calculates detrended values.
Note that I calculate 10* the detrended log values, which was used in a previous project for the S&P 500 stock market index. From that community I have adopted a [fractal, Fibonacci] scaling of the outputs on a [-5.7] interval. Fibonacci levels (and related Elliott waves) are very widely used in technical analysis for trading, and I apply them to the historical series as well.
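The detrending formula itself is a one-liner; a sketch in awk, assuming columns of year, price, and Yest, with Yest already in log10 units (the sample rows are made up) :

```shell
# detrended = 10 * (log10(price) - Yest) : the scaled residual from the
# fitted segment. awk has only natural log, hence log($2) / log(10).
printf '%s\n' '1300,100,1.9' '1310,126,2.0' |
    awk -F, '{ printf "%s,%.3f\n", $1, 10 * (log($2) / log(10) - $3) }'
# -> 1300,1.000 and 1310,1.004
```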
The main detrended plots are defined in GNUplot files :
Simple template for working on a dataset
08********08
ddmmmyyyy Template for data analysis steps
1. copy section to new date near start of this file
2. copy-replace "Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC"
......with the new photo-image-curve name
3. careful to update information as you work
NOTE :
d_gimp="$d_PROJECTS""References/Fischer, David 1996 The Great Wave/gimped/"
d_Fischer="$d_webRawe""economics, markets/Fischer, David 1996 The Great Wave/"
However, most applications REQUIRE the full path!!
+-----+
gimp : crop graph, curve [separate out, make [continuous, mono-color]]
/media/bill/Dell2/PROJECTS/References/Fischer, David 1996 The Great Wave/gimped/Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC raw image.xcf
/media/bill/Dell2/PROJECTS/References/Fischer, David 1996 The Great Wave/gimped/Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC raw image.jpg
+-----+
digitizelt : digitize graph curves using https://www.digitizeit.xyz/ software
$ java -jar "$d_SysMaint""digitizelt graph from image/DigitizeIt.jar"
/media/bill/Dell2/PROJECTS/References/Fischer, David 1996 The Great Wave/gimped/Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC raw data.dat
Use an invert sort as the numbers are negative! (use the spreadsheet if data crossed [BC, AD])
$ sort --reverse "$d_gimp""Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC raw data.dat" >"$d_Fischer""Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC sorted.dat"
Add column titles : must be QNial phrases - replace [space, punctuation]s with underscore (_)
+-----+
QNial : process digitized data
add_logCol link d_Fischer fNameBase ' sorted.dat'
paste to : graph-curve sheet "Fig 0.02"
column titles : must be QNial phrases - replace [space, punctuation]s with underscore (_)
column separator : tab
copy log formula down the "log_price" column
copy : titled-data ["i_dat "year "Fig_5_01_prices "log_price]
to : "$d_Fischer""Fig 5.02 ancient Greece [barley, olive oil] 450-150 BC, QNial format.dat"
+-----+
Set timePoints for transitions between price [revolutions, equilibria], Linear regression of price [revolution, equilibrium] segments
onset timePoints :
equilibrium revolution equilibrium last_data
-1796.061 -1739.991 -1684.230 -1624.393
+-----+
Do singular linear regressions for each time segment
Use
"$d_Fischer""Fig 5.01 linear regression raw data.dat"
"$d_Fischer""Fig 5.01 prices in ancient Babylon 1850-1600 BC.dat"
+-----+
GNUplot - create graph of nominal data
"$d_Fischer""Fig 5.01 prices in ancient Babylon 1850-1600 BC.plt"
+-----+
[calculate, plot] detrended results for all curves analysed to date
"$d_Fischer""1850 BC to 2020 AD prices detrended.dat"
Software tools - general comments
Here are the key [image, graph] tools that come to mind :
My basic toolset for much-of-what-I-do :
- [Unix, bash] [environment, command, script]s
- example commands : [find, grep, sed]
- piping, I/O redirection, etc, etc.
- [geany, kwrite] text editors for programmers, especially :
- regular expression [search, replace] - How could I have been so slow to [learn, adapt]? Years of wasted effort and stupidity!!! I should have listened to friends.
- "vertical" editing across lines. I will NOT work without that anymore!
- Queen's University of [Kingston, Ontario, Canada], Nested Interactive Array Language (Q'Nial)
- LibreOffice Calc spreadsheet program - This is equivalent to Microsoft Excel, and is fairly compatible other than scripting. Excel is a very high quality application, no doubt, but it costs far too much (forget their bozo home editions) and it belongs to one of the evil empires (another evil empire is Apple, but there are plenty more).
- gnuplot - I realize after far too many decades that I have been a complete moron for using the graphical capabilities of spreadsheet programs. Far too often the use of spreadsheets themselves is also moronic, for example when text files can be far more [open, powerful, effective], but spreadsheets are great data warehouses, allow far [faster, easier, ad-hoc] transformations of data than any other means, and they do the best job of making data visible and transformable (up to some data [dimensionality, size] when visual becomes clumsy or problematic).
My requirements for software :
- Must be able to run the entire program from a bash script or any other programming language of your choice, like Q'Nial that I also like, or as other examples C, Forth, LISP, Python, Perl, Java, whatever...
- Cheap - like free...
- Graphical User Interfaces (GUI) are great when you are just starting to learn software and its capabilities. After that, and apart from many specialised tasks where the GUI shines (like gimp), GUIs are just a pain in the ass and get in the way of doing real work [en masse, fast, of quality, reliably, that is [transferable, general]]. I imagine that many people require GUIs because they suck at scripts (i.e. myself not so long ago), or when they use the software only rarely, with plenty of time between uses to forget everything. In that latter case, scripts are a vastly superior approach, except of course when people are not familiar with scripting.
??Jan2020 initial posting
[#!: full-line executeEmbeds, phraseValueList = (("fout fout)("backtrack backtrack))
[#!: path_executeEmbedsInsertIn_fHand (link d_webWork 'fin Footer.html') phraseValueList ;