Review: Ancient Greek Warship: 500–322 BC

Ancient Greek Warship: 500–322 BC by Nic Fields
My rating: 2 of 5 stars

While I find this a good introduction to Athenian ships, the book does a far less good job of actually fulfilling its promise to discuss “Greek” ships. Overall, the ships’ military performance is not well assessed, with Corinth and Corcyra mentioned only in a few short paragraphs. Speaking historiographically, some of the other conclusions Mr Fields draws sound more like conjecture than actual science, and I feel that quite a few other books take a better look at Athenian triremes (invariably the city and the ship this book focusses on) and at least do not pretend to deal with other topics.

I will briefly touch on those other topics. Mr Fields describes “positive buoyancy” as the main reason why triremes have not come down through the ages, while it has always been my understanding that the salinity of the Mediterranean, along with the biological organisms living there (compare the low-salinity Baltic Sea and the oxygen-deprived Black Sea, for example), is the main reason why wooden ships in the area have generally not been well preserved.

Secondly, Mr Fields makes it sound as if the trireme was the cause of the Athenian defeat in the Syracusan Expedition, and that is not quite how I would read Thucydides (who, admittedly, is not the most unbiased author). Some similar claims are made about other battles, and the book omits some of the most famous Themistoclean statements on triremes, which one should consider a mainstay of any book on Athenian navies. I also find Mr Fields’ inability to stop referring to Athens as an “empire” quite poor, especially since in a book of this length accuracy of statement should be paramount (hence, “Athens and Her Allies”…).

Lastly, Mr Fields says that “control of the seas in the modern sense was impossible for a trireme navy”. This could be the beginning of a discussion longer than this post warrants, but in short, I think he is wrong. Conceptually, war had a different purpose in that day and age, and no one had even thought of “control of the seas” à la Mahan.

The illustrations, however, are superb as ever.


What was ‘today’ in Eastern Rome?

The question “What was a person thinking of as ‘today’?” in the Eastern Empire, which I asked over here on History SE, can have several possible answers depending on the era and the general circumstances at play in the imperial realm.

The simplest idea that the Western world has, the Ab Urbe Condita (from the founding of Rome), is mostly an earlier, Principate-era fiction that was used more for ‘official’ dates than for accurate timekeeping. The calculation of Marcus Terentius Varro, the author of the presently accepted date for the founding of the City, was accepted as gospel by Claudius for propaganda. Hence, it is unlikely many people in the empire ever thought of their present day in terms of how long after the founding of the City it was, unless it was a celebration of some kind, and I have no reliable information on such celebrations being continued in Byzantium/Constantinople.

The other early form of timekeeping was the consular office. In the Republic, it was common for years to be known as the “Year of Consul 1 and Consul 2”, in imitation of regnal years. Justinian I, however, abolished the practice of annual consuls. With repeated consulships accumulating (‘the first consulship of…’, and so on), this method must have served recordkeeping more than everyday timekeeping. Similarly, when consuls were not appointed, years were given in terms of how long after an established consulship they took place. I cannot imagine many people thinking in these terms either, especially given the following few options.

The official calendar of the Empire between 988 (the 28th year of Basileios II’s reign) and 1453 was the Creation Era (Etos Kosmou), dated backwards to start on 1st September, 5509 BC. While 988 is when it was adopted by the Imperial government, earlier usage for religious purposes within the church had been common ever since the 7th century. Local offshoots and earlier versions of the Etos Kosmou, such as the Alexandrian Era, existed at times, but would not have been as common throughout the empire (not to mention that Alexandria was lost in 641 AD, save for brief reconquests soon after).

Three of the more important methods of timekeeping have not yet been covered: the Julian calendar, regnal years (mentioned briefly above in relation to consular years), and the Indiction. Regnal years must clearly have been an important part of most people’s lives within the empire, especially as they followed the earlier Hellenic tradition of the eponymous archon (of Athens). I would therefore say, first of all, that most people always knew in what year of their Emperor’s reign they lived.

This answer requires qualification: after 988, with an official reckoning adding power to the Church’s, it is not impossible that many people thought in both systems, but especially in the Church’s version. This is likely to have been the case particularly in periods when Eastern court politics saw a variety of people take the throne in quick succession, in which case most provinces may well have been quite unaware of any changes in leadership. This is in addition to Justinian I having made the use of regnal years mandatory in 537 AD.

The Julian calendar would no doubt have been more common in some provinces, especially due to the prevailing church influence. Hence, it is not unlikely that, especially from the 4th to the 7th century, plenty of people reckoned their time (the year, at least) based on Gaius Julius Caesar’s re-alignment of the traditional Roman calendar. However, it seems this gradually fell out of use, both as the Etos Kosmou saw increasing adoption and as regnal years and the Indiction cycle gained traction.

The Indiction was a 15-year cycle which also began on September 1st. Again, it was Justinian I who decreed that all documents must be dated by it. Indeed, given its nature, I feel that the vast majority of people would have been most familiar with this system: even if the emperor’s name had changed, the tax collector would probably arrive on time. Hence, I think that for the duration of the Eastern Empire this would have been the most likely answer to get from the majority of people, with the Etos Kosmou second, at least after the 10th century.
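For the curious, the arithmetic tying these systems together is simple enough to sketch in a few lines of code. The rules below are the standard ones (Etos Kosmou = AD year + 5508, or + 5509 from 1st September onwards; the indiction is the Etos Kosmou year modulo 15, reading 0 as 15); the function names are merely my own illustration.

```python
def etos_kosmou(year_ad, month):
    """Byzantine Creation Era year for a Julian-calendar AD date.

    The Byzantine year began on 1st September, so dates from September
    onwards fall in the following Etos Kosmou year.
    """
    return year_ad + (5509 if month >= 9 else 5508)

def indiction(year_ad, month):
    """Position (1-15) of the year within the Indiction tax cycle."""
    return (etos_kosmou(year_ad, month) - 1) % 15 + 1

# 29 May 1453, the fall of Constantinople:
print(etos_kosmou(1453, 5))  # 6961
print(indiction(1453, 5))    # 1
```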


This is a copy of my own answer on History SE, an answer I put up because I felt the existing options there did not sufficiently answer my query. In the hopefully near future, I will carry out more research on this topic to find better sources and build a more consistent narrative.

Solar Flares: A Brief Look into 2D Modelling

Though I am trying to move past the one-post-a-year habit, it seems it shall continue for a while longer. Meanwhile, let me tell you that last semester amused me greatly: the lecturers allowed us to choose our own project to model. So, I went all out and thought of the most hideously complex thing I could. Magnetohydrodynamics in a solar flare (or rather, a solar flare including the magnetohydrodynamics) won the competition, so I set out to think about it in some detail (not too much detail, though, since that would be well beyond me; indeed, I was quite happy keeping the level of detail rather obscure and low).

However, what I found was good fun all round. Previously, for some reason, I had thought that solar flares were rather well understood. Now I am far better educated: indeed, we know so little that it is amusing how we do not strive to know more. After all, flares (and coronal mass ejections) influence so much of our near-Earth environment, and perhaps even our climate (if on a ‘short-term’ basis). But then, were that question up to the scientists, we would probably have a network of satellites around every celestial body in our star system, measuring as much as we can and enjoying the constant influx of data.

I will keep today’s introduction short (having planned to write it since the very beginning of October), so I shall only continue with a brief description of the modelling methods I used. Since solar flares take place in an environment that is very difficult to observe directly, the majority of our models are tested against incomplete sets of observations. The models therefore come in varying degrees of complexity, with the easiest division lying between 2D and 3D models (where 2D actually implies a 2.5D situation). These subdivide further, but I shall not go into that (this time round).

Before all that, though, the term magnetic reconnection needs to be introduced. Magnetic reconnection has been described in many ways, but as it is relevant to flares, it should be understood as the process in which magnetic field lines break apart due to plasma stresses and other factors (the majority of which are not well known) and then reconnect at some other point in space and time. This reconnection is measured (calculated and modelled, that is) by a dimensionless value known as the rate of magnetic reconnection. The 2D models rely on calculating this rate and then comparing it with observed values to assess the model’s degree of accuracy.

The simplest 2D models were first created by Mr Sweet (I would not dare guess whether he was a Professor or a Doctor). Corrections were soon suggested by Parker, and the result is known as the Sweet-Parker model. It is generally found to be far too slow to accurately model the magnetic reconnection that goes on in a flare, though the approximations it makes keep it one of the easier models with which to study them.

Soon after, a slightly more complex model was created by Petschek. The Petschek model is generally considered more accurate, achieving rates of magnetic reconnection closer to the observed values than the Sweet-Parker ones by a few orders of magnitude. The results can also be very accurate, but based on my experiments (inherently flawed in so many different ways) they are not necessarily so.
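To give a feel for the gap, here is a minimal sketch comparing the two scalings, assuming the standard textbook forms (Sweet-Parker rate ~ S^(-1/2), maximum Petschek rate ~ π/(8 ln S), where S is the Lundquist number); the coronal value of S used is merely a typical assumed figure.

```python
import math

def sweet_parker_rate(S):
    """Sweet-Parker reconnection rate, M ~ S^(-1/2)."""
    return S ** -0.5

def petschek_rate(S):
    """Maximum Petschek reconnection rate, M ~ pi / (8 ln S)."""
    return math.pi / (8.0 * math.log(S))

S = 1e13  # an assumed, typical Lundquist number for the solar corona
print(f"Sweet-Parker rate: {sweet_parker_rate(S):.1e}")  # ~3.2e-07
print(f"Petschek rate:     {petschek_rate(S):.1e}")      # ~1.3e-02
```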

In effect, it can be said that the assumptions the Petschek model makes are not inherently more complicated than the Sweet-Parker ones, yet its results are of a higher degree of accuracy. And that is the thought with which I would like to leave you today.

David Attenborough’s ‘Africa’

The BBC recently finished broadcasting David Attenborough’s new nature series, ‘Africa’. I had the good fortune to watch all of it near-immediately (and I got to the last episode far faster than I did with ‘Frozen Planet’). Now I get the chance to tell you all what I think.

Firstly, I think that the technological advancements we see in filming are amazing. The starlight camera we see in use with the rhinos is spectacular! I think one really needs to see the scenes to understand what I mean, but if this now proves that it is possible to film in starlight without a noticeable loss in quality… that is good news all round!

Secondly, my favourite episode must have been ‘The Cape’. To begin with, the Cape is a very interesting place in my mind, and to see it come to life between the two oceans as it did here was quite breathtaking. I wanted to go there. I still do. The views of the Drakensberg Mountains were good, and I had nothing bad to say about the scenes of those marine birds feeding either. Like lances from the sky…

Now, however, all is not brilliant. For some reason it seems to me that Mr Attenborough wishes to be more dazzling than he is; how else can we explain his decision to improve upon the facts regarding climate change in the last episode? Fortunately, the good army of climate scientists was on it and noted that the numbers Attenborough suggested were not supported by the science, the actual figures being a bit lower.

Aside from that episode, I feel sorry for the poor cameraman whose tree perch got battered by that herd of forest elephants. Although surely he must think back to it now and go: “I think it amazing!”

Maybe there is a sadness in me that the BBC team decided to bypass the Okavango Delta, although we did make it into the Sudd swamps, the second major wetland area. It might be that the prehistoric-looking bird there was what caused only one of these areas to be featured, but I would have hoped the Okavango would be in there.

I was pleasantly surprised by the footage from the Atlas Mountains; I would genuinely not have believed that it could be that… Nordic… in Africa. Maybe it is a very small area, but even so, I can imagine a brown bear feeling very happy in those forests. And if that can be, well, what can’t?

On the Quality of E-Books

Whilst I generally prefer to live a peaceful life in which reading is an important everyday part, I discover every now and then that there are a number of difficulties with this approach. Generally, everything works well, or well enough, and I do not have to regret the money spent or effort put into purchasing and reading books. But there are also moments when I wish to say something about what is being done under the near-proper term of “digital publishing”.

Let me start by insisting that while the following will be true in a large number of cases, it has notable exceptions, and I will bring out at least one that I have seen myself. Likewise, the problem does not exist only in digital books, but at least with digital books the solution is simple.

Now, I have mentioned a problem but have not yet defined it. If I may: customers are paying considerable sums of money for books in digital form, for download to e-readers or other devices with similar functionality, and yet the product the customer receives is not always presented in a finished form.

Namely, while in regular publishing there is a certain level of spelling and quality expected of anything sent to the press, in the digital world this same quality seems to have disappeared, with the publishing houses seemingly content to upload anything without ascertaining its quality.

As the next step, I will clarify my own position: I own a Kindle (and have owned previous Kindles in the past), and I spend a reasonable amount on digital books. Digital reading, or e-reading, certainly accounts for the majority of books I read these days. I do not mind paying to read anything another person has written or published, but I do expect any product I receive to meet certain standards of quality.

Let me give a concrete example. Over the last few weeks I have read a number of books by Jack Campbell on my Kindle, all of which were priced between £5.50 and £6.00. This price was accompanied by an explanation that the books ran to approximately 300 pages in other versions, and that the file containing each book was between 300 KB and 700 KB in size. In other words, a very small file with an average-length book had been priced at the aforementioned sum. Let me be very clear: had there been nothing else, I would not mind this price, for it is clear that the good Mr Campbell needs to make his income from something.

However, there was “something else”. Namely, the books were readable, but my enthusiasm decreased as I encountered more and more spelling mistakes and punctuation errors. One would think that a simple spell check could catch problems like these, or that a single read-through of the book would note that a word has been split into several pieces (say, “in def ens ible” comes to mind).
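For what it is worth, even the split-word case is mechanically detectable. Here is a toy sketch of the kind of check I have in mind; the small word list stands in for a full dictionary, and the function is purely my own illustration.

```python
# The tiny word list is a stand-in for a full dictionary.
WORDS = {"indefensible", "the", "word", "was", "split"}

def find_split_words(tokens, max_parts=4):
    """Flag runs of fragments that join into a dictionary word."""
    hits = []
    for i in range(len(tokens)):
        for j in range(i + 2, min(i + max_parts, len(tokens)) + 1):
            joined = "".join(tokens[i:j]).lower()
            if joined in WORDS and not all(t.lower() in WORDS for t in tokens[i:j]):
                hits.append((" ".join(tokens[i:j]), joined))
    return hits

print(find_split_words("the word in def ens ible was split".split()))
# [('in def ens ible', 'indefensible')]
```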

Can anyone tell me how this is a fair use of the money that the publishing house and Mr Campbell make off the people purchasing their products?

I remember that when I first read George RR Martin’s “A Dance with Dragons”, the same issue was present. I also know that is the only time a book on my device has been updated, but since I have not read it again thus far, I cannot say how much has improved.

To get back to the main issue though: we, the customers, are receiving products at a stage at which no self-respecting publishing house would release them as paperbacks, and yet we pay a very similar amount to what a paperback costs. So, where’s the quality I was expecting?

Do I need to pay extra for the publishing houses to trouble themselves by reading through the works at least once?

What needs to change so that I would be able to buy a final product that I could read in peace?

Are the publishing houses deserving of the money I have paid for these titles if they cannot put in a small measure of effort to make their own creations presentable?

Now, I’ll note that digital publishing is not the only culprit. The one title I have from Forgotten Books’ “Easy Reading Series” is similarly full of spelling mistakes and riddled with bad punctuation, but at least there I have a personal copy of the title, which acts as a small measure of comfort for the similar price I paid for it.

I am hopeful that this trend in digital publishing can change — I say this not only with the one example I brought in mind but also remembering a number of other items I have read which have been sub-par. However, I also think that we customers need to be more vocal about establishing some set of standards.

I guess the other option would be to set a price per kilobyte, and then the publishing houses can sell me whatever they want, with me going in knowing that there is no massive profit lurking for these same entities behind the screen, very much unlike the present situation.

So, how do we go about establishing that what is sold to us is a book that we can read when we purchase it?

And until we have managed achieving some standards of quality, let’s make the issue more public!

Reflection Geophysics

Reflection geophysics, or at least how we apply reflection methods in the modern day, has made me think quite a bit about how they used to do it back in the day. I have a fair idea of how gravity and magnetic anomalies were interpreted and modelled in those dark times, but when we get to reflection and refraction, the equivalent forward models are already quite a bit more computationally intensive.

If anyone does know how the first reflection surveys were modelled and interpreted in the ’20s and ’30s, please do inform me. Until I learn more, whether through educated guesses or by finding it out, I’ll try to bring up what strikes me as the difficult part, at least when I compare it with how we do it in the modern day.

Firstly, a number of shots at different points along a line are fired and recorded. I can imagine the recording device being a simple seismograph at some point in the beginning (or a device operating as a seismograph would, with a needle mapping the perturbations).

Secondly, we need to stack these shots after removing noise. Now this is the difficult part. Admittedly, noise removal would not be that complex, although I would expect trouble in trying to use a seismograph to distinguish a direct arrival from a refracted or reflected one; even that could be possible depending on how the survey is set up. But stacking? Without the ability to automatically add up shot records from various locations, the only method that remains is manually going through the seismograph records and adding up the measured amplitudes to create an approximate stack. That stack would naturally be wrong in a variety of ways, given the inherent inaccuracy of manually re-graphing data which is not that accurate to begin with, but the process probably looked something like this.
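For comparison, here is a minimal sketch of what the modern, automated version of that stacking step boils down to; the trace count, the synthetic signal, and the noise level are all invented for illustration, and I assume the traces have already been aligned to a common reference.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 24, 500

# A shared reflection signal buried in independent noise on each trace.
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_samples))
traces = signal + rng.normal(0, 1.0, size=(n_traces, n_samples))

# Stacking: the coherent signal adds up while random noise averages down,
# improving signal-to-noise by roughly sqrt(n_traces).
stack = traces.mean(axis=0)

print(np.std(traces[0] - signal))  # ~1.0, a single noisy trace
print(np.std(stack - signal))      # ~0.2, the stacked section
```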

Thirdly, once we do have a stack, we should actually be able to base something on it and attempt some interpretation.

Or it could be that the people involved knew that the equipment and method were not refined enough, and met the above process halfway with a proposed model that they then tried to fit to the ground. Say, for simplicity, that the scientist thought there would be roughly four different layers, one of which was relevant to what he was looking for. He would then, based on the locations of the shot points and gathers, calculate what those shots should look like if the subsurface actually looked the way he thought it did.
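That forward calculation is easy to sketch for the simplest possible case. Assuming a single flat reflector and a constant velocity (the depths and velocity below are invented), the predicted two-way travel time at offset x follows the familiar reflection hyperbola t(x) = sqrt(t0^2 + (x/v)^2), with t0 = 2d/v:

```python
import numpy as np

def reflection_time(x, depth, v):
    """Two-way travel time at offset x for one flat reflector."""
    t0 = 2.0 * depth / v
    return np.sqrt(t0**2 + (x / v) ** 2)

offsets = np.linspace(0, 1000, 6)    # receiver offsets in metres
for depth in (200.0, 400.0, 800.0):  # candidate layer depths to test
    t = reflection_time(offsets, depth, v=2000.0)  # assumed velocity, m/s
    print(f"depth {depth:5.0f} m:", np.round(t, 3))
```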

I am unsure how many tries and tests that second way would have required, but I guess it would truly belong to that older age people speak of, when science was half art, and the best scientists could guess and then approximate based on their guesses, so that in essence they were proving themselves correct with data. If it was anything like this, then I am both happy and sorry that I can use a variety of complex programs for the same purpose. Happy because my life is quite a bit easier, with less chance of going horribly wrong fifty times in a row; and sorry because the cultivation of a skill such as guessing anything that complex must have been very, very interesting.

Of the Aurorae

Based on the news and word out there, we just saw the Aurora Borealis in Norfolk. I smiled when I realised that I had missed it, though people I know saw it (or a reflection of it). I don’t even know why I smiled at something as innocuous as that, but I did. And I enjoyed the thought that one day I will look at the aurorae (somewhere else, probably) and make the most of the sight.

And I was pleased to find out that it was indeed that.

Artificial

Had a long discussion with a housemate over what constitutes being artificial. I have no idea how we came upon the topic, but we certainly raised some interesting issues. Namely, he said that he considers dogs artificial, since they were selectively bred by man. I would have disagreed with that were it not for my wish to continue reading, but I’ll give a short overview of what I consider “artificial” to mean here, since the question in itself is rather interesting.

I would start with the example of art: art, for me, is not artificial. It is not, because to create art (to draw, sing, write, etc.) there has to be an idea, and that idea is a consequence of thought. I see thought as a natural process, and therefore art is just giving thoughts an earthly form.

An example of an artificial item would, however, be Teflon. Yes, sure, there was the idea of the material, which turned into its creation, but the defining difference is that for it to become reality, someone had to manipulate the molecular composition of matter to create a substance with new properties. This is a good example of artificiality.

The same is exhibited by the concept of artificial intelligence: it has been conceived to act on its own, so past the first act of starting up, everything it does is a creation of a creation. Therefore, it is unnatural (not that it should not be, but that it is not from nature).

The same housemate brought up the example of a substance that is found in nature and can also be synthesized by man, the only difference in the final product being that the natural one has a higher degree of purity. To his question of how to differentiate between the two, I answered that one has been engineered by man while the other is the result of long natural processes. Substituting those processes with a temporally short chemical-engineering experiment removes the quality of naturalness and defines the final result as an artificial substance.

While short, I believe that this provides a sort of answer to this very interesting question.

Gravitational Sea-Level Rise

I thought to start this blog with a first scientific post touching gently on the topic of my last essay, the interactions of the oceans and the ice sheets. I looked more closely at the East Antarctic Ice Sheet, and what I found surprised me: mostly because it was very compelling, but also because I had previously not heard of this one type of sea-level change.

I’ll include the full references at the bottom of the post, but building on Clark and Lingle’s study from the ’70s (a paper largely ignored at the time), Gomez and her research group calculated that the most immediate effect of the melting of the Antarctic ice sheet would be the migration of water away from the southern continent.

Now, when brought up without a reason why it happens, the process seems a bit odd. However, the physical basis behind the claim seems solid enough: namely, the ice sheet, as a massive expanse of kilometres-thick ice, has created a gravitational field that attracts water. Upon melting, this gravitational attraction will disappear, leading to water migrating away from the Antarctic continent.

The two studies also produced more specific results. Namely, the immediate effect near the Antarctic would be a drop in sea level around five times the size of the average rise resulting from the meltwater. The areas accumulating the most water due to this migration (I believe the values were around 25% larger than the average sea-level rise) would be located in the southern Atlantic and central Indian oceans.
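To make the idea concrete, the ‘fingerprint’ of a melt event can be thought of as a location-dependent factor multiplying the global-mean (eustatic) rise. A toy sketch, with the factors merely stand-ins suggested by the figures above rather than values from the papers:

```python
# Stand-in fingerprint factors based on the figures quoted above,
# not values taken from the papers themselves.
FINGERPRINT = {
    "near Antarctica": -5.0,       # a fall several times the mean rise
    "southern Atlantic": 1.25,     # ~25% above the mean rise
    "central Indian Ocean": 1.25,
}

eustatic_rise_m = 0.10  # an assumed 10 cm of global-mean rise from melt

for region, factor in FINGERPRINT.items():
    print(f"{region}: {factor * eustatic_rise_m:+.3f} m of local change")
```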


Clark, J.A. & Lingle, C.S. (1977). “Future sea-level changes due to West Antarctic ice sheet fluctuations.” Nature 269.

Gomez, N., Mitrovica, J.X., Tamisiea, M.E. & Clark, P.U. (2010). Geophys. J. Int. 180, doi:10.1111/j.1365-246X.2009.04419.x.

On Science

Since science factors into nearly every aspect of what remains to be done, I’ve decided to branch out once more and create a hard-science side to this blog, where I’ll try to be as scientific as is reasonable while exploring some of the topics I enjoy reading about (along the lines of: benthic mysteries; ocean-ice sheet interaction; river dynamics).

The philosophical discussions of science (as often, or not, as they have occurred) will still fall under the jurisdiction of this branch; the add-on, therefore, is meant as a fully scientific exploration of some topics (most likely short reviews of what I read and find interesting enough to share).

I’ll try writing something up later today as an introductory remark (most likely about marine ice sheet dynamics, unless my current sedimentology papers prove an easy read that I can explain immediately).