I suppose everyone has their own ideas on this. I'm not an economist and I'm probably being very naive, but I thought I'd have a go.
On holiday I was reading Plato's 'Republic'. In it he describes a kind of 'ideal' communist society, where everyone works for the common good, like worker drones in an ant colony - complete with staples of communist regimes like censorship, indoctrination etc. Not really my idea of an ideal society. But it does make you think.
Personally I believe there is a balance to be struck between pure capitalism and free markets (as the USA strives towards) and pure socialism (where everything is done for the state, and there is no reward for enterprise). I am not sure where the exact best balance lies, but there are some clues. I saw the movie 'Sicko' recently, which examines one aspect of this question: the reality of a free market health care system in the USA versus the more socialist systems in e.g. the UK or France. OK, it vastly overrates the usefulness of our NHS (us Brits think it's not up to much), but I'd take that any day over the system portrayed in the US. Medical insurance is fine in theory, but if they really can refuse your care on a technicality then the whole system becomes quite sick really (as per the movie title).
So while I believe very strongly in free markets and having reward for enterprise, I also think certain services are best provided by the state - such as health, police, fire etc.
Currently, with the bank crisis, I find myself asking some similar questions about the banking system. If capitalist societies rely on banks as their backbone, the foundation upon which society is laid, then the banking system must be as stable as a rock. As detailed in the previous post, there are some very serious and fundamental flaws with the current banking systems. While they work very well on sunny days, they become worse than useless on rainy days.
For something so fundamental I can see two possible solutions.
One is that the state should run the fundamental banking system for a country (or possibly even a world bank?). Nationalization - the bank's money IS the state's money - there can be no more confidence than that, I think. And if the state takes on toxic debts from e.g. the US, then it is its own fault, it takes the hit, the tax payers complain and vote in a more prudent government next time.
The second solution, probably more likely as it is less radical, is that the banks, as 'keystones' in the economies of the world, are subject to intense regulation in everything they do. Every loan and debt is recorded, audited and viewable (perhaps even by anybody?) so that their financial books are constantly under scrutiny, much like open source software. And have independent bodies (presumably this is what the FSA is meant to be for) scrutinizing the books and imposing vast penalties for taking on too much risk.
However, as one banking expert pointed out, the problem with the second solution is that it is easy to spot a risky loan in hindsight, but much more difficult to manage risk on a day to day basis. Mind you, you can't help thinking that it must be possible to manage risk better than certain institutions were doing (e.g. Northern Rock).
It is interesting that the banking systems used to be far more highly regulated, but I believe that in the Reagan and Thatcher years many of the regulations were removed, and more so by Brown and his cronies. Perhaps they were the true architects of this crisis.
On top of all this, one also has to ask some questions about the whole fractional reserve banking 'con', which exacerbates the whole problem in times of crisis. The question is, could we live without it? Or has it become the engine that drives our economies?
Thursday, 16 October 2008
The Credit Crunch
It's been a while since I posted, but I've been on holiday and since I came back I've been dealing non-stop with the symptoms of the financial crisis. It's certainly made me read and understand a lot more about economics. I'm still far from knowledgeable about it all... but here are a few thoughts on the matter.
First, to set the scene:
Over the past year there has been increasing worry in the banking sector about the presence of 'sub prime' loans in the system. In the USA in particular, the banks had been so eager to keep making profit that they had loaned money for house purchases to people who had no hope of ever paying off the mortgage.
Instead of keeping this debt on their own books, these banks packaged up the debts and sold them on to other international banks, while presumably downplaying the risk that the debtors would default.
Of course if the debtors defaulted, there was always the house left as collateral for the loan - i.e. the banks could reclaim the house, sell it on, and get back the money that was owed.
However, an added problem is that house prices in the US (and the UK) have been dropping. In many cases it could also prove nigh on impossible to sell on the houses once the buyer defaulted. This meant that there were an awful lot of 'sub prime' loans in the international banking system that were not worth a hell of a lot.
Of course the banks that originally made the home loans didn't really care too much - they had sold the toxic debt on. After all, there seems to have been little regulation in this industry.
Throughout the last year worries about this problem have been spreading through the global banking system. Banks work by taking deposits from us, then investing most of that money themselves, by mechanisms such as providing home loans, or lending it on to other banks to invest at an inter-bank lending rate (LIBOR). They keep a little cash on hand, just in case some depositors come asking for their money back, but the vast majority is out there invested somehow, making the bank interest.
Now what has happened is that banks have become worried that their neighbouring banks may have become contaminated with lots of worthless sub prime loans on their books. As far as I can see, the books of the banks are confidential, so there is a game of rumours and Chinese whispers as to which banks have lots of their money tied up in these worthless loans.
The problem is that if I lend money to a bank that has lots of toxic loans, and that bank runs into liquidity problems as a result, then I'm not sure if I'll get my money back. Now consider that this applies both to me as a depositor AND to other banks in the inter-bank lending market.
The banks really don't want to lend to each other because of this risk, and thus the inter-bank lending rate (LIBOR) is sky high. That means the only way banks can make money and operate is through their own cash from depositors, and by making loans themselves rather than via other banks.
This results in the banks having to offer very high interest rates because they are desperate for the cash from depositors. It effectively means that in the past few years they have become less and less tied to central bank (e.g. Bank of England) lending rates - i.e. they are ignoring the moves that politicians make, because free market forces have taken over.
It also means that with this small amount of working capital the banks have to be VERY careful about who they loan it to (to make profit), AND they will only loan it at high rates of interest, as they need to make money to survive.
This means many businesses (particularly small ones) will apply to their bank for a loan to operate, and be refused. Businesses are thus having to downsize, or go under, from lack of loan money, and thus a lot of people are going to lose their jobs. This job loss stage is just beginning. When people lose their jobs, or are worried about their jobs, they cut back their spending, thus lowering the money made by other businesses, leading to more redundancies etc. The cycle continues and we have a recession.
But wait!! It's even more complicated than that. There is an extra 'BONUS' risk. This can be quite tricky to understand, so I'll say it slowly:
Banks are basically more advanced versions of the 'money lenders' in the temples in Bible stories etc. The idea is that if you are rich, you can either hold onto your current wealth, or you can make it grow even bigger by lending it out temporarily to other people, charging them 'interest' - a percentage - for the privilege of the loan.
Of course if you are going to do this, you need some kind of mafia scenario, where you have enforcers to beat up your clients because many of them are very unreliable and will need 'persuasion' to pay back your loans.
I digress... anyway, this was the initial system: rich people lent out their money and got it back with interest. A few years later some bright sparks came up with the concept of a bank. Instead of having a rich guy provide the capital, a company (the bank) would build a big vault to ward off robbers, then offer citizens the ability to deposit their cash in the bank (to keep it safe).
The citizens were happy - they could keep their cash safe (or at least safer than under the mattress) - and the banks had capital, some of which they could lend out at interest to businesses, home buyers etc.
The above is a simple banking system. There is however a problem even with this system. Because the bank has invested much of its capital, if all the savers came to the bank at once and demanded their money back, they couldn't have it!! The bank would suffer a liquidity crisis (a technical way of saying they didn't have the cash), as it was tied up in loans to other people / businesses.
This scenario is called a 'run' on a bank. Providing people have confidence in their bank, on average only a small percentage of savers will be asking for money out on any day... matched approximately by other savers putting money in. In this way, providing the bank keeps a reasonable amount of its capital in cash form (not invested), it can stay solvent.
But wait, here's the mad bit. At some point along the line banks stopped using hard cash (coins etc.) to lend, and started using, in effect, 'I owe you' notes for lending, mortgages etc.
Then some incredibly bright spark(!) invented what is called fractional reserve banking. If a saver deposited say 100 pounds into their account at the bank, the bank would (theoretically) have 100 pounds it could then invest and lend out to e.g. a homebuyer somewhere. This is very logical.
One day the bank managers got together and decided that, being such reliable fellows, why not increase their potential to make profit by allowing themselves to lend out MORE money than they had in deposits!! i.e. when a depositor gave them 100 pounds, that wasn't going to make them much interest on investments, so instead they would invest that 100, but also conjure up another 900 from thin air and invest that too!!
After all, these notes they were issuing for mortgages etc. were only IOUs - they could write anything they wanted on them. And they were reliable sorts, these bankers; providing everyone paid their loans back, they would make 10x the profit and no one would suspect a thing!!
So fractional reserve banking is kind of like a con trick, except it has become accepted as the conventional way of doing banking. That is because, in most cases, it works... it applies 'leverage' and makes 10x the profit from the same amount of depositors' money.
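To put rough numbers on that leverage, here's a toy calculation (the 10:1 ratio and the 5% interest rate are just the example figures from the text, not any real bank's reserve requirement or rates):
------------------------------------
#include <iostream>

// Toy illustration of the leverage described above.
int main()
{
    const double deposit = 100.0;      // pounds deposited by a saver
    const double reserve_ratio = 0.1;  // fraction kept back as 'real' cash

    double full_reserve_lending = deposit;                 // lend only what was deposited
    double fractional_lending   = deposit / reserve_ratio; // lend ~10x against the same deposit

    const double interest_rate = 0.05; // illustrative 5% interest on loans
    std::cout << "Interest earned, full reserve:       "
              << full_reserve_lending * interest_rate << " pounds\n";
    std::cout << "Interest earned, fractional reserve: "
              << fractional_lending * interest_rate << " pounds\n";

    // The same 10x multiplier applies to the losses when borrowers default,
    // which is where the trouble described below comes from.
    return 0;
}
------------------------------------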
However this con can multiply the problems caused when a bank runs into trouble. When a normal business runs into trouble it goes into administration, and the administrators take whatever assets are left, sell them off, and split the proceeds among the people the company owes money to.
But with a bank, most of the debts it has were made with IOUs - they were never backed by real money!! That means when a bank goes under, in many cases the IOUs will become almost worthless. This means that in a risky climate, banks are incredibly risky things to lend your money to.
And this is what is happening: the whole banking network is built on the fractional reserve 'con' trick, so the banks are incredibly wary of lending money to each other, just in case one of them goes down. And if one goes down, any banks that have lent money to it will suddenly find themselves a lot worse off, and in a position where they could go down. And then again any banks which rely on this second bank get taken down, and the cycle continues. The danger is that the whole banking system can fall over like a stack of dominoes.
This is essentially, as I understand it, what happened in the Wall Street Crash of 1929 and the depression of the 1930s that followed. Very large numbers of banks collapsed, millions of people lost their savings, lots of businesses went under and there was massive unemployment.
The US government at the time believed strongly in the free market capitalist system 'to the end', and thus didn't provide any help when this situation occurred. It saw it as the 'weak banks' being taken out leaving only the fittest still standing.
Of course it doesn't actually always work like that. And they neglected to realise that the knock-on effect of this would be a collapse of the rest of the economy, as everything in capitalism depends on the banking system - i.e. banking is the backbone on which everything else rests. If the banking system goes, your whole system of society is at risk (you can end up with anarchy, everyone for themselves).
This time round the scenario is very similar. Most people have been unaware of the risks involved here; they are too busy watching 'Big Brother', or seeing what Madonna or Kylie are up to - they aren't 'interested' in financial matters. It reminds me of that scene in Constantine, where Keanu asks the woman 'do you believe in the devil?'. 'No,' she replies. 'Well, he believes in you!'. It doesn't matter whether people have any interest in the financial system, they are still wholly dependent on it for practically everything in their lives.
This time round most casual observers make the same comments and mistakes that were made in the 1929 crash: 'It's the banks' fault, let them go down'. Of course the stack of dominoes would result, and the world could fall into the abyss. Quite frightening that these are also voters.
Luckily those making the decisions (well some of them) are a bit more versed in the hazards and the knock on effects. We stand on the edge of the precipice. As far as the governments are concerned they want to maintain the status quo. On the surface the problem is one of confidence. They want to restore confidence. Confidence on the one hand to depositors, in order to prevent runs on the banks. And confidence on the other hand to the banks so they will lend to each other, and hence make them more able to provide loans to businesses and homeowners that keep the economies of the world ticking.
The latest plan used by Gordon Brown and Alistair Darling, and now being followed to some extent in many countries, is to address these problems by providing capital (to prevent liquidity problems) and by providing guarantees for inter-bank lending, to get the banks lending to each other. It is in effect trying to apply a giant band aid to the current banking structure / status quo.
Of course, because of the fractional reserve banking system, the figures involved are enormous, but hey, the tax payers have no choice, they elected their governments... Besides, it's just going on the countries' own debts (they each seem to have run up some kind of international 'tab' - another con perhaps?). And the argument is that if it works it costs little, because it's only a 'guarantee', an insurance against the inter-bank lending, and everything should work smoothly. Shouldn't it?
Well, now we are beginning to see the signs. The plan was announced around a week ago in the UK, and rather more recently in other countries. The FTSE / Dow Jones etc. all jumped on the news of the global 'bailout' for the banks. But now they are falling again. People are beginning to realise that the problems of trust are more endemic and are proving very hard to solve... they will probably take years to return to normal levels (and I doubt they will without some kind of modification to current systems).
The inter-bank lending rate (LIBOR), in the UK at least, hasn't responded as Gordon Brown / Darling would have hoped. In short, the banks still aren't lending. We are still on the edge of the precipice. And what's more, many of the governments have 'shot their load'. They don't have infinite finance. They can't carry on pumping billions and trillions to prop up the banking system indefinitely. And the bad debts of Lehman Brothers are going to be looked at shortly. What other institutions might go down as a result of this? What will be the knock-on effects of several European banks going down - in Iceland, in Britain, in France, in Germany etc.? What were the interlinks? Will the stack of dominoes start to fall?
Interesting times!
Wednesday, 30 July 2008
Knowledge Storage
I'm back at work on Egor now.
While the old version handled sentences such as 'what is a cat?', I now want to extend this fully to WH questions (what, why, where etc.):
Thus questions such as 'where do you live?'. The old system was a bit of a bodge. Now, when a question like this is formulated, it adds an entry for the unknown 'WH-word' into the knowledge tree... it can either identify the answer now, or perhaps come up with the answer at a later time when it has more knowledge.
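To give a feel for what I mean by an 'unknown' entry, here is a very rough sketch (the names and structure are invented for illustration - this is not the actual Egor code):
------------------------------------
#include <memory>
#include <string>
#include <vector>

// Sketch of a knowledge tree node that can hold an unresolved WH placeholder.
struct KnowledgeNode
{
    std::string word;        // e.g. "you", "live", "kitchen"
    bool is_unknown = false; // true for an unresolved WH-word entry
    std::vector<std::unique_ptr<KnowledgeNode>> children;

    KnowledgeNode* add_child(const std::string& w, bool unknown = false)
    {
        children.push_back(std::make_unique<KnowledgeNode>());
        children.back()->word = w;
        children.back()->is_unknown = unknown;
        return children.back().get();
    }
};

// 'Where do you live?' might become: you -> live -> [WHERE?]
// The placeholder can be resolved straight away if the answer is already in
// the tree, or left in place to be filled in when more knowledge arrives.
void add_where_question(KnowledgeNode& subject)
{
    KnowledgeNode* verb = subject.add_child("live");
    verb->add_child("WHERE?", /*unknown=*/true);
}
------------------------------------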
An interesting thing happens when you look at slightly more complex variants of these questions.
For instance:
------------------------------------
The cat eats sardines in the kitchen.
The cat eats mice in the garden.
Where does the cat eat sardines?
------------------------------------
Initially I was storing the information that the cat eats sardines, and that the cat eats mice, on separate branches (sub trees) from the subject. However, it occurred to me that reusing branches may be the way to go, both in terms of efficient compression of information, and in terms of speedy and efficient access to it.
However, once you start compressing the information, another 'issue' appears:
If you store 'the cat eats sardines in the kitchen' in one tree, the order of the object and the supplementary information essentially doesn't matter...
i.e. the cat eats in the kitchen sardines = the cat eats sardines in the kitchen.
Once you start compressing several sentences of information in the same subtree, you then have to start considering the order of information.
Thus: The cat eats sardines in the kitchen, The cat eats tuna in the kitchen...
You may start to think of this as a hierarchy: cat -> eats -> in the kitchen -> sardines / tuna
However, this has many implications. Firstly, you can no longer directly store information as generics (i.e. in tree terms the 'in the kitchen' node needs to be distinct and have child nodes). This is an added level of complexity - so we would have to be sure we were getting a payback for that complexity.
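Purely as an illustration of the two layouts (a toy structure, not the real Egor knowledge tree):
------------------------------------
#include <memory>
#include <string>
#include <vector>

// Toy node type, just to show the two layouts side by side.
struct Node
{
    std::string label;
    std::vector<std::unique_ptr<Node>> children;

    Node* add(const std::string& l)
    {
        children.push_back(std::make_unique<Node>());
        children.back()->label = l;
        return children.back().get();
    }
};

int main()
{
    // Layout 1: every fact gets its own branch under the subject.
    Node cat1{"cat"};
    cat1.add("eats")->add("sardines")->add("in the kitchen");
    cat1.add("eats")->add("tuna")->add("in the kitchen");

    // Layout 2: shared branches - 'in the kitchen' becomes a distinct node
    // with its own children, compressing the data but fixing an ordering
    // (cat -> eats -> in the kitchen -> sardines / tuna).
    Node cat2{"cat"};
    Node* kitchen = cat2.add("eats")->add("in the kitchen");
    kitchen->add("sardines");
    kitchen->add("tuna");
    return 0;
}
------------------------------------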
In addition, once you start to consider several pieces of supplementary information for a sentence, the optimum storage arrangement may not be obvious (i.e. how you are going to regularly access the information determines the best tree structure).
As I am modelling things according to how biological systems tend to work... there is also the point that biological systems often take the simplest path (making complexity from simple rules) rather than working with a complex 'operating system'. I.e. there is a danger of anthropomorphizing the problem - producing a computer science solution instead of a simpler (possibly more biological) solution.
I am not sure which one to go with at the moment, because it seems a major design issue. I may well start by experimenting with the simple approach. It may turn out to be incorrect (and later need a considerable rewrite), but the fact is that the whole project is a huge undertaking and I would rather have a simple system working than a more complex system that I didn't have nearly enough time to get to a working state.
In essence I can't hope to get everything perfectly right and optimal on my first attempts, I think this is something that will be refined in many decades to come, to one or several optimal solutions.
Tuesday, 8 July 2008
Ending Poverty - Why Geldof's View is Naive
I read today how Geldof is again urging the G8 to 'help the poor' in Africa.
A long time ago, do-gooders in first world countries noticed the poverty in third world countries, and decided that the best way they could help was by 'charity' and by providing loans, so that these countries could supposedly get on their feet and support themselves to the same 'standards' as the first world.
What in fact happened was that money and aid were provided to corrupt governments, who mostly squandered them, leaving the country in debt for stupid amounts of money it had no hope of repaying, with the interest each year on the debt being too much to pay, let alone the full amount. This is now generally regarded as a mistake and is referred to as 'third world debt', and in some cases it has been cancelled by the lending countries.
Yet still there are those that believe that somehow these countries will only be able to advance if given sufficient pots of gold from the first world.
If we ignore the problem of corruption, and totally inappropriate aid (for example education in places where there is no opportunity to utilize that education), there is still an incredibly glaring reason why increasing aid is unlikely to reduce human poverty and misery.
It comes down to very basic population ecology - the concept of the 'carrying capacity'.
From wikipedia:
"The supportable population of an organism, given the food, habitat, water and other necessities available within an ecosystem is known as the ecosystem's carrying capacity for that organism. For the human population more complex variables such as sanitation and medical care are sometimes considered as part of the necessary infrastructure.
As population density increases, birth rate often decrease and death rates typically increase. The difference between the birth rate and the death rate is the "natural increase." The carrying capacity could support a positive natural increase, or could require a negative natural increase. Carrying capacity is thus the number of individuals an environment can support without significant negative impacts to the given organism and its environment. A factor that keeps population size at equilibrium is known as a regulating factor. The origins of the term lie in its use in the shipping industry to describe freight capacity, and a recent review finds the first use of the term in an 1845 report by the US Secretary of State to the Senate (Sayre, 2007).
Below carrying capacity, populations typically increase, while above, they typically decrease. Population size decreases above carrying capacity due to a range of factors depending on the species concerned, but can include insufficient space, food supply, or sunlight. The carrying capacity of an environment may vary for different species and may change over time due to a variety of factors including: food availability; water supply; environmental conditions; and living space."
In many third world countries, particularly in Africa, there is a tendency towards large families. With no social security or pensions, a family depends on its children for survival and prosperity. Thus there is, as in many species, a tendency for the population to increase dramatically over time if we reduce factors such as disease, war and malnutrition.
In the areas that cause the most concern the population is often by and large limited by these 'misery' factors, such as poverty and disease.
So in a village you have, for example, 500 people leading an OK life, and 500 people living in absolute misery, on the brink of death.
Now let's have a look at what happens when you apply aid from the 1st world.
Initially there is much happiness as all of those 1000 people are released from complete poverty and can live an ok life.
However, the problem comes when you consider the population size over time. With this extra help, more of the population live to an older age, and produce many children. The aid that was once there either dries up, or best case stays at the previous level.
What you now end up with is for example, 2000 people living in the same area.
With aid removed, perhaps the land can support 500 people to live comfortably, and now 1500 are living in poverty!! Or best case you have 1500 people permanently dependent on outside financial support.
That's right, think about it for a second. By all that 'do-gooding' action, you have let the population increase beyond its carrying capacity, and you have, in effect, tripled the human misery!!
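For anyone who wants to play with the numbers, here's a toy simulation of that village example (all the figures - the 500 / 2000 capacities, the 5% growth rate, the 15 years of aid - are made-up illustrations, not real data):
------------------------------------
#include <algorithm>
#include <iostream>

// Toy version of the village example above. All numbers are illustrative.
int main()
{
    const int land_capacity   = 500;   // people the land alone supports comfortably
    const int capacity_on_aid = 2000;  // people supportable while the aid keeps flowing
    const double growth_rate  = 0.05;  // assumed 5% natural increase per year

    double population = 1000.0;        // 500 leading an OK life + 500 in misery

    for (int year = 1; year <= 30; ++year)
    {
        const bool aid_flowing = (year <= 15);  // assume the aid dries up after 15 years
        const int capacity = aid_flowing ? capacity_on_aid : land_capacity;

        if (population < capacity)
            population *= (1.0 + growth_rate);  // below the current capacity, the population grows

        const int in_misery = std::max(0, static_cast<int>(population) - land_capacity);
        std::cout << "year " << year
                  << ": population " << static_cast<int>(population)
                  << ", beyond what the land alone supports: " << in_misery << "\n";
    }
    return 0;
}
------------------------------------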
This is why any 1st world intervention to supposedly 'help' a 3rd world country must be carefully planned - because you can see that in the majority of cases, it will result in an increase in suffering, rather than a decrease.
It would seem that the most obvious way to decrease suffering in a harsh part of the world is to limit the number of children, so that those who do live there can be better supported by the environment.
Sunday, 22 June 2008
Web Browser
It's been a bit of a gap since my last post - I managed to get sidetracked into doing some work on a website I set up last year, which involved lots of PHP, MySQL and JavaScript. Now that I'm back and ready for some 'proper' coding I needed a refresher on C++, so I've decided to have a quick go at a second version of a web browser I wrote a couple of years ago.
The 'skanky sea dog' web browser is just a bit of fun really... I'm doing it as a little learning project so I can learn the details of how HTML / web servers work. The first version was very simple: it downloaded the HTML for a page, did some incredibly basic processing of it to show some text on the screen, and downloaded some of the images (more a proof of concept).
This time, after learning a bit of CSS and JavaScript, I have a somewhat better understanding of how the HTML DOM (document object model) works. As it is a tree structure, and I have a fair bit of experience dealing with tree structures (they seem to crop up everywhere in coding), I thought I'd have a go at parsing the HTML into a tree of C++ objects, with different types for different HTML tags.
The parsing was actually not all that tricky. I had earlier written an XML parser, so I modified it to get some useful functions for HTML parsing, then allowed the tree to 'build itself' by parsing the HTML - i.e. when it encounters an [html] tag, it creates an html node and begins parsing for child objects within this node. When it finally encounters the [/html] tag, it is finished with the node and moves up to the higher node in the tree (in this case the document node), until the whole document has been processed.
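Very roughly, the 'build itself' idea looks something like this (a stripped-down sketch, not the real parser - attributes, text nodes and malformed HTML handling are all omitted, and the input is assumed to be a pre-split list of tag names):
------------------------------------
#include <memory>
#include <string>
#include <vector>

// Each node parses its own children until it meets its closing tag.
struct HtmlNode
{
    std::string tag;
    std::vector<std::unique_ptr<HtmlNode>> children;

    // tokens is assumed to hold tag names like "html", "body", "/body", "/html".
    void parse_children(const std::vector<std::string>& tokens, size_t& pos)
    {
        while (pos < tokens.size())
        {
            const std::string tok = tokens[pos++];
            if (tok == "/" + tag)
                return; // our closing tag - hand control back up to the parent node

            auto child = std::make_unique<HtmlNode>();
            child->tag = tok;                   // e.g. "body", "div", "p"
            child->parse_children(tokens, pos); // the child consumes its own subtree
            children.push_back(std::move(child));
        }
    }
};
------------------------------------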
For rendering, I knew that I had to somehow implement a version of the HTML 'box model', i.e. child elements determining the size of parent elements, or parents constraining the size of child elements etc. I have no idea how Firefox and IE handle this, but I have done it using the old trick of traversing the tree.
I do several passes, down the tree to the leaf nodes, then up again to the root, doing different operations on each pass, gradually refining the box models for each element (things like widths, heights, minimum width, desired width, offsets etc).
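A very simplified sketch of the idea, with just two passes and widths only (the member names are invented for illustration; the real code tracks more fields and does more passes):
------------------------------------
#include <algorithm>
#include <vector>

struct Element
{
    int min_width = 0;      // narrowest the content can get (e.g. widest single word)
    int desired_width = 0;  // width the content would like if unconstrained
    int width = 0;          // width actually assigned
    std::vector<Element> children;

    // Upward pass (leaves to root): a parent's minimum width must be at
    // least as big as its widest child's minimum width.
    void gather_min_widths()
    {
        for (Element& c : children)
        {
            c.gather_min_widths();
            min_width = std::max(min_width, c.min_width);
        }
    }

    // Downward pass (root to leaves): give each element its desired width if
    // it fits in the available space, but never less than its minimum.
    void assign_widths(int available)
    {
        width = std::max(min_width, std::min(desired_width, available));
        for (Element& c : children)
            c.assign_widths(width);
    }
};

// Usage: root.gather_min_widths(); root.assign_widths(window_width);
------------------------------------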
This seems to be working, and now I'm doing some refinements for tables to allow the columns and rows to line up.
For rendering, I ideally wanted to make things cross platform, but as I'm only familiar with Windows, I've tried to separate out the rendering code a bit. Each HTML element has its own Win32 window. Whether that's a good or bad idea I don't know yet... I basically had the choice to draw everything manually, or rely on the Win32 techniques. While I like doing everything manually, Win32 normally makes it an enormous pain in the ass to do anything manually, so I'll go with the flow for now and see what happens.
Incidentally, finding the sizes of image elements is easy... text elements are more tricky. I used things like getting the text metrics and finding the pixels used by each word in turn to determine the minimum widths and desired widths for text elements. In fact in some elements (e.g. [div] or [body]) you can have text elements intermingled with images etc., so it is slightly more complex determining widths, but it is very doable.
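Something along these lines, measuring word by word with the GDI text extent functions (a sketch of the general idea rather than the actual browser code):
------------------------------------
#define NOMINMAX        // stop windows.h defining min/max macros that clash with std::
#include <windows.h>
#include <algorithm>
#include <sstream>
#include <string>

// The minimum width is the widest single word (the narrowest the element can
// be without splitting a word); the desired width is roughly the whole run on
// one line.
void measure_text(HDC hdc, const std::string& text, int& min_width, int& desired_width)
{
    min_width = 0;
    desired_width = 0;

    std::istringstream stream(text);
    std::string word;
    while (stream >> word)
    {
        SIZE size = {};
        GetTextExtentPoint32A(hdc, word.c_str(), static_cast<int>(word.length()), &size);
        min_width = std::max<int>(min_width, size.cx);
        desired_width += size.cx;   // NB: ignores the width of the spaces between words
    }
}
------------------------------------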
I get the feeling that while the overall structure of the browser code is quite well laid out, the intricacies could quickly become a bit of a rat's nest, because I think there will be so many 'special cases'. This may be part of the explanation for the differences in behaviour between Internet Explorer and Firefox; however I think there is probably some fundamental difference in their box model calculation methods which leads to their 'quirks'.
Monday, 21 April 2008
Egor videos
After a long gap, I've finally got back to doing some more work on Egor. It's actually been really good having a rest, and coming back with a fresh perspective.
Anyway I did a couple of videos this morning, showing some of the basics, hope you like!
Wednesday, 12 March 2008
The Solution to Spam
I was just reading today on The Register about how spammers have defeated the CAPTCHA protection designed to stop automated registrations with the major email providers.
http://www.theregister.co.uk/2008/03/11/global_spam_trends/
Some time ago it struck me that there may be quite a simple solution to the whole spam issue. In fact I think my current ideas are based on a suggestion by none other than Bill Gates(!), who I think suggested having a 'stamp' or small cost associated with sending an email.
The idea is that if you can introduce a cost to sending email, no matter how small, it will deter spammers because when you are sending millions of emails, these costs rapidly add up and make the whole thing unprofitable.
I think this idea is really good. The variation I currently think might be successful (I can't remember whether I read this or just came up with it after Bill's suggestion) is as follows:
Instead of having a fixed cost per email, have every person creating an email account make a DEPOSIT. That is, a deposit of good faith to indicate that they are not going to use the account for spamming. Now the idea is, when someone receives an email that they regard as spam, they mark it as such, and through 'an undetermined mechanism' the sender loses their deposit.
There are 2 obvious variations here:
1) The deposit is large (say 10 pounds) and the sender loses access to their account on confirmation of spam activity.
2) The deposit is small (say 1 pound) and the sender loses the ability to send mail until they REPLACE the deposit (a rough sketch of this variation follows below).
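As a toy model of variation 2 (purely illustrative - no real mail system or email protocol works like this, and the 1 pound figure is just the example above):
------------------------------------
#include <map>
#include <string>

// A sender whose mail is confirmed as spam loses the deposit and cannot send
// again until it is replaced.
struct Account
{
    double deposit = 0.0;
    bool can_send() const { return deposit >= 1.0; } // the 1 pound deposit from the example
};

struct MailSystem
{
    std::map<std::string, Account> accounts;

    void open_account(const std::string& sender)    { accounts[sender].deposit = 1.0; }
    void replace_deposit(const std::string& sender) { accounts[sender].deposit = 1.0; }

    // Called only once the report has been confirmed as genuine spam.
    void confirm_spam(const std::string& sender)    { accounts[sender].deposit = 0.0; }
};
------------------------------------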
The obvious problem is abuse by the receiver of the message. If they don't like you, or want to play a joke on you, they could classify your mail as spam and e.g. lose you a pound, which is unfair. So you would need an impartial human to vet the reported spam and check it before imposing the penalty.
Anything where there are humans involved becomes more costly. BUT WAIT!! If the spammer is losing their deposit, you can actually use this money to pay the checkers!! :)
It has occurred to me you could use a similar system for telephone spam too, if you receive an unwanted call, you simply press your SPAM button on the phone, and the caller loses their deposit.
Of course the issue is that a new email protocol would need to be used in order to prevent spoofing the sender, but if we bear in mind how long the existing system has been used, is it unreasonable to expect a revision, based on the experiences of use over the past 30 years? Any system undergoing widespread use usually shows flaws which can be corrected in a revision to the standard to make it more robust.
There is actually no reason why a new email standard could not be used in parallel to the existing system for a number of years, and if the benefits are significant, the market will adapt to using it.
Wednesday, 5 March 2008
Music Composer App
As a little side project I've been writing a music composing application. I've been planning to for a few years now, just never got round to it! Actually I've learned a bit more win32 programming since I last did a music app around 10 years ago, so things are coming along quite quickly. Having said that, although I've done a lot of programming I haven't done a lot of windows GUI type coding, partly because lots of flashy user interface stuff doesn't interest me as much as the things going on underneath... so the user interface so far is pretty basic lol! Of course once I have the basics working I can photoshop up some graphics and make it look more flashy.
My aim is to produce something like fruityloops (FL studio) in functionality, but more geared towards composition. In the past I've found many sequencers very annoying because although they tend to be very versatile, and let you program any music, they tend to make it a very tedious process that takes many hours. I instead want a system designed for rapid composing, with lots of helpful tools and systems that are 'composer friendly' instead of being 'tech friendly'.
The other thing I used to find really annoying back in the old days of MIDI and multitrack tapes was the nightmare of getting everything in sync. I completely solve this problem in my app by placing samples / instruments at precisely calculated positions, sample accurate, so typically accurate to 1/44100th of a second. Of course to get bang-on sync, you also need to make sure your samples start on the B of the Bang, so I have tools for simplifying this. The interesting case is instruments with a slow attack (such as a slow bowed string sound). To get this tight, should you chop off the attack, or start playing the sound BEFORE the 'note start'? These are questions a human player would handle naturally, but a computer needs a plan of action for this type of thing.
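A minimal sketch of the timing arithmetic (my illustration here, not the app's actual code): converting a beat position at a given tempo into an exact sample offset at 44.1 kHz, and pulling a slow-attack sound earlier so its perceived onset lands on the beat:

SAMPLE_RATE = 44100

def beat_to_sample(beat, bpm, sample_rate=SAMPLE_RATE):
    # One beat lasts 60/bpm seconds; round to the nearest whole sample.
    return round(beat * (60.0 / bpm) * sample_rate)

def note_start_sample(beat, bpm, attack_seconds=0.0):
    # Start playback early by the attack time so the sound 'speaks' on the beat.
    return beat_to_sample(beat, bpm) - round(attack_seconds * SAMPLE_RATE)

print(beat_to_sample(4, 120))            # beat 4 at 120 bpm -> sample 88200
print(note_start_sample(4, 120, 0.150))  # slow string pad starts 6615 samples early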
Tuesday, 29 January 2008
The saving culture
http://news.bbc.co.uk/1/hi/health/7214709.stm
Elderly and disabled people in England are increasingly being denied social services, a report says. The Commission for Social Care Inspection said councils were tightening their criteria which determines who is eligible for care.
The watchdog said the situation meant there were 275,000 people in need of help receiving none while another 450,000 suffered shortfalls in care.
The BBC have on this topic run a 'have your say' section so people can put forward their thoughts. Most have complained that the government should do more to look after old people, and how they have paid their taxes etc.
While I think it's right people complain (and they should, to bring attention to such issues), from what I have read the problem from the post war years is that the proportion of the population that are elderly is increasing as we are leading longer lives with medical advances.
The big question seems to be - who should be responsible for looking after an individual when they reach an age when they cannot look after themselves? Should it be the state? Should it be their family? Or should they have saved over their lifetime to provide finance for their care in old age?
My own belief is that in the UK the great thing that is lacking is the saving mindset. Those that are financially successful long term tend to have the saving mindset, and those that end up depending on state benefits tend (on average) not to be so good in this area.
For myself I have been lucky in this respect ... firstly in being brought up by my parents as a saver, and secondly in having their financial support. I guess as everyone 'comes from somewhere' it makes it hard to have unbiased views on the topic. But objectively speaking I find it sad that some of my friends who run into financial difficulty are, in a sense, 'addicted' to spending what money they do have.
Over a lifetime, in a vaguely capitalist system, I believe that people should be encouraged to save and build up their savings, so that they can form a buffer to look after themselves and their loved ones in difficult circumstances. The most unfortunate thing, coupled with the mindset problem mentioned earlier, is that governments (particularly the UK government) routinely penalize people for saving.
People who have very little and depend to a certain extent on extra benefits are hugely discouraged from saving. The moment they start to put away money each month for their long term benefit, the government will correspondingly reduce any state benefits they receive (childcare etc). This means in practice that saved money is wasted money - the individual sees no advantage. Instead, if someone has 200 pounds left over in any given month, they are better off spending it on a TV set or some other asset, because this is not 'counted against' the individual when deciding state benefit rates.
Hence you get the bizarre situation where thousands of people living off benefits have council houses stocked up with the latest high tech gadgetry, playstation 3s, xbox 360s, plasma TVs, sky etc etc... to the extent that they have more gadgetry than many middle income self supporting families!! It is bizarre but makes total sense, given the benefit system.
Another example of such a problem is the situation faced by many single mothers. They can end up in a situation where financially they are better off not working than working! In many cases I have come across, single mothers who do work do so not in order to increase their income, but in order to feel as though they aren't dependent on a state handout. Indeed, often until a single mother is earning quite a considerable amount (well over 20K) there is no significant financial benefit to working!!
While state benefits should provide a backup solution to help people in need, there is clearly something very broken in the UK system, where individuals see no personal advantage in getting themselves off the breadline. Capitalism works only when people will see a reward for their effort. If you want to get the millions on the breadline making some contribution to their own welfare, you simply have to make it worth their while.
Wednesday, 23 January 2008
Straight A's no longer enough for top universities
I see they are introducing a new 'A*' grade for A levels, because so many pupils are getting As that the universities can't select on that basis:
LONDON (Reuters) - Achieving three A grades at A-level will no longer be enough to ensure a place at a top university, academics warned on Wednesday. From September sixth-formers will begin studying A-level exams which will include a higher grade of A* for those getting marks of 90 percent or above in their papers.
http://uk.news.yahoo.com/rtrs/20080123/tuk-uk-britain-education-exams-fa6b408_3.html
Of course it's nothing to do with the schools being ranked on their grade performance; obviously teaching has got orders of magnitude better than when we were at school *sarcasm*. Such a magnitude of change is unlikely to be due to genetics, so it must be due either to the environment or to the marking system.
It's almost inevitable that this will continue to happen, given that schools are 'marked' in league tables on this basis. What they should do (in addition perhaps) is introduce a mark similar to the IQ mark:
While IQ tests vary considerably, there is a built in 'normalization' for the result. That is, if you give 10,000 people an IQ test, the average mark will always be 100, BY DEFINITION. If it's an easy test, you'll STILL get the same people tending to score above 100, and similar people scoring below 100. So the actual mark in the test is passed through a mathematical function which compensates for the population result, to give a more standardized result (the IQ).
The same process can be applied to any exam, and if applied to A level results would give a clear, fair and consistent means for universities to select. An additional benefit is that this process could be used to correct for the inherent 'easiness' of some subject choices over others.
Thus the current A, B, C etc scale could be used as an ABSOLUTE measure of performance (poor as it is), and a normalized scale similar to IQ could be used as a RELATIVE measure of performance (more suited for selection).
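A minimal sketch of the normalization step (the marks are invented; the 15-point standard deviation is just the convention many IQ scales use):

import statistics

def normalize(raw_marks, target_mean=100, target_sd=15):
    # Rescale raw marks so the cohort has mean 100 and standard deviation 15,
    # regardless of how easy or hard the paper happened to be.
    mu = statistics.mean(raw_marks)
    sigma = statistics.pstdev(raw_marks)
    return [target_mean + target_sd * (m - mu) / sigma for m in raw_marks]

easy_paper = [92, 95, 90, 98, 88]          # nearly everyone gets an 'A'...
scaled = normalize(easy_paper)
print([round(s) for s in scaled])          # ...but the relative ranking is preserved
print(round(statistics.mean(scaled)))      # 100, by construction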
Monday, 14 January 2008
Tesco Online Website
Tonight I'm going to talk about a subject dear to my heart. The tesco online website.
Oh dear.
Those of us in the UK will be familiar with Tesco, one of the biggest, if not the biggest, UK food superstore chains. Many of us in the UK often shop online for our food rather than going to the store. Alright, I'm lazy .. but I also don't drive, and carting back a huge shopping haul on a bike is not pleasant. So I am more than happy to pay the 5 quid or so delivery charge to have someone do this for me.
I first started using sainsburys online service, and was quite happy with it. However, I find sainsburys tend to be a bit expensive overall (for myself, I'm not so bothered about paying for premium quality) so I moved over to tesco.
I am usually very impressed by the way tesco picks my food in the store for me and delivers it with nice drivers, doing what can't be a super pleasant job. I am very happy with the service, apart from in one area - the website!
The tesco online website is SO BAD, it's almost beyond belief. Alright, it looks very nice, but the problem is, it DOESN'T WORK. Now I'm not a complete newbie to website design myself; I've written several sites, and have a reasonable knowledge of web technology such as html, css, php and sql.
I am currently stuck tonight with no food, having for the umpteenth time spent over an hour attempting to shop at the website. My main web browser is firefox (like 20% or so of web surfers), and I have a totally up to date install and disabled all plugins (to give tesco online the benefit of the doubt).
If I'm lucky, I can log in to the site. However, when I click on a link, for example to show 'my favourites' in order to place my order, 99% of the time the website just hangs. If I click the link again sometimes the site gets very confused indeed, and tries to download an .aspx file to my computer. No, I don't want an .aspx file, I would like to see the website, thank you very much.
After 15 mins of trying this and getting nowhere I give up and fire up internet explorer, which I keep for situations like this. I have version 6 of IE (perhaps this is where I am slipping up, not being interested in updating more microsoft bloatware). With all the security turned down to minimum, internet explorer fails to even load the front page :( . Sometimes I have got further with IE on the tesco site, but not tonight.
I went back to firefox. By a stroke of luck I managed to add some items to my shopping cart. My prebooked delivery slot had long since disappeared, probably due to me having to log in multiple times, and delete my cache repeatedly, to get anything to load.
So my message to Tesco, the company, would be simple. Whoever is in charge of your website, fire them. They are guilty of gross incompetence. A child could build a more serviceable website. The problem is probably in part due to the use of .NET type microsoft software in combination with incompetent website design. Given the huge amount of profit you make every day, please please please hire some competent web designers to make you a working website.
I would advise writing one that doesn't rely on proven unworkable technology, and instead opting for something more commonly used and proven to be scalable. If you really can't do it, I'd probably write the website for you, free of charge, or gladly instruct your website team on the basics of software development.
Here's hoping.
P.S. I may end up having to go back to sainsburys or a competitor who has a website that works. I know this means nothing in the grand scheme of things, and probably tesco online sales are a very small proportion of their operation, but I really like the rest of their system, it just makes me incredibly sad that a large corporation with so many resources can get something so horribly, horribly wrong. :(
Thursday, 10 January 2008
Filesharing and the Information Age
I see here in the UK the government is attempting to put pressure on ISPs to do something about filesharing:
triesman_isps_legislation_timetable
The music publishers and movie industry have been continually putting pressure on governments to attempt to get them to toughen legislation against filesharing. In a way, I don't blame them, they are businesses, and seek to maximise their revenue.
The problem (for them) is that once the internet, which was basically designed and built as a means TO SHARE INFORMATION, became established across the world, the old monopolies on putting a value on information started to break down.
In the documentary, 'Steal this film 2', this new paradigm is explored in some depth.
In the last century, information was a precious commodity, perhaps largely due to the difficulty of making copies. Copying out information by hand, and later via the printing press, was a costly enterprise, involving equipment, material, transport, shelf space, advertising costs, warehouses, etc etc.
The internet totally blows this old paradigm away. Making a digital copy of information is, in most instances, totally free, and produces a perfect copy, every time. In addition to this, the internet allows free advertising - information can spread in a viral fashion and by other means, at no cost.
This means for the end user, if they can duplicate information from a friend, then they can have access to that information for free, whether it be an mp3, movie, game, application, or the design of a spaceship. The industry argument is that the end user is 'stealing' the music or film. However the end user argument would be, if they were not going to buy the information anyway, then there is no loss to the content producer, because they were never a potential customer.
However, as there is no physical loss involved, 'steal' is maybe not the right word; legally speaking the act is copyright infringement rather than theft. It is also not dealt with by criminal law but by civil law, where the recourse of the content producer is to sue the end user for damages. In reality, though, the legal recourse in the simple case has no bite, because if you were to sue an end user for copying a movie, the economic loss would be the price of a movie ticket. The only way for industry prosecutions to have any 'bite' is for them to sue on the basis of an end user also being a file sharer, i.e. they publish the content on for other users to download. It should be obvious that it would be possible to claim greater financial damage for this act than for downloading.
The UK government is under pressure to give the impression of making some effort to preserve the status quo of copyright protection. This latest move seems to have the idea of passing the responsibility on to the ISPs, to prevent all those naughty people enjoying all that free information.
Although anti-piracy organisations can currently take advantage of the non-anonymous nature of several peer to peer protocols, in the long run, this approach will not work. It is based upon a fundamental misunderstanding of how the internet works.
The Difficulty of Eavesdropping
The internet works by sending little 'packets' of data around, from computer to computer, through wires, routers, switches, fibre optic cables etc etc. Each packet contains some basic information, like the address of the computer it should be delivered to. The rest of the packet is arbitrary.
This means on a fundamental level, if a whole load of bytes are being transferred between one computer and another, it is very difficult (pretty much impossible) to determine what these bytes mean, once they have been encrypted. At the moment, most internet data is unencrypted, and it's pretty easy to 'packet sniff' simple packets conforming to well known protocols such as web page requests and other web browsing data.
If every packet floating through the internet was unencrypted, and had a nice header on it saying 'I am legitimate web browsing data', or 'I am illegal file sharing data' with the name of e.g. a movie in plain text, it would STILL be enormously difficult to monitor this data.
Consumer broadband connections typically provide somewhere between 75k and 2000k per second of data. Now multiply this up by millions of users - even a million connections each shifting a few hundred kilobytes per second adds up to hundreds of gigabytes flowing every second. That's an exceedingly large amount of data for any ISP to attempt to monitor.
Now the actual problem is FAR FAR more difficult than this. The problem for any 'snooper' is that illegal filesharing traffic is not marked with a special flag to say 'HELLO EVERYONE!! I'M ILLEGAL FILE SHARING TRAFFIC!!'. Herein lies the problem with this whole approach. An ISP could capture all the data passing to and from a PC, send it to a team of IT forensic professionals, and STILL have absolutely no idea what the user was transferring. Fair enough, if you already know you are looking for a certain DIVX compressed movie and have that data on file to monitor against, you could conceivably try to match each packet against the comparison file (although it would be horribly inefficient and take ages). But because you wouldn't know WHICH bit of content it was a priori, you'd have to compare it with EVERY PIRATE BIT OF CONTENT AVAILABLE in order to have a hope of getting a match.
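To make that concrete, here is a toy sketch (all the data is invented) of what naive content matching would involve: the monitor needs a fingerprint of every chunk of every known pirated file, the chunk boundaries have to line up, and the moment the payload is encrypted it matches nothing at all:

import hashlib, os

CHUNK = 1024  # pretend every transfer is split into neat 1 KB chunks

def chunk_hashes(data):
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

known_movie = os.urandom(10 * CHUNK)   # stand-in for a DIVX rip the monitor has on file
catalogue = chunk_hashes(known_movie)  # ...and this would be needed for EVERY known file

def looks_infringing(packet_payload):
    return hashlib.sha256(packet_payload).hexdigest() in catalogue

plain_chunk = known_movie[:CHUNK]      # unencrypted transfer of the monitored file
encrypted_chunk = os.urandom(CHUNK)    # the same data after encryption looks like random noise

print(looks_infringing(plain_chunk))      # True, but only if chunk boundaries line up exactly
print(looks_infringing(encrypted_chunk))  # False - the monitor learns nothing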
Now that gives some idea of the extent of the problem for an ISP trying to monitor a SINGLE user. Now consider that that user is filesharing using, e.g. uTorrent, and decides, 'Hey, you know what, I don't want my ISP to know what I'm downloading, it's none of their business!!'. They go to their options and click a little tickbox which says 'ENCRYPTION'. With one move, monitoring attempts are effectively screwed.
With unencrypted content it's INCREDIBLY difficult to monitor a user's data flow. Once it's encrypted, it's pointless.
Here's an example of a filesharing packet captured in wireshark. Is it legal or illegal? How would you know? How would you prove it was illegal in a court of law?
If monitoring attempts to 'home in' on particular types of packets, coders will just modify the file sharing source code to make it mimic other packets. If you go simply on volume, there is no way to prove that a user is downloading a movie rather than, for example, a service pack update for their operating system. And if you want to just start disabling users because they have high traffic, well, you might just as well switch off the internet.
Rather amusingly, in addition to this whole process being completely futile, there is another reason why ISPs REALLY don't want to start monitoring users' data. That is a legal reason. At the moment there exists a provision whereby ISPs are not held responsible for the data that flows over their network, BECAUSE they cannot monitor it. This is known as the ISP defense. If ISPs do start monitoring data, then it opens the door for any content provider to sue them. Why didn't the ISP do anything about a user stealing their image? etc etc.
The Bittorrent Flaw
However, while all this is true on a theoretical level, there currently is a large security flaw in many peer to peer systems, particularly in run-of-the-mill bittorrent. It is this that will probably be taken advantage of, until anonymous protocols become widespread. The way the flaw works is this:
While it is currently very difficult to determine the contents of encrypted streams by 'eavesdropping on the wire', the enforcers don't actually have to. All they have to do is fire up their bittorrent client (or a modified version), choose to download a movie / mp3 that they own the rights to, then examine the list of peers in the swarm. Yes, that's right folks: when you have a file available via bittorrent, the people who are downloading from you (in your swarm) can see your IP address, and along with a timestamp, that's all they need to track down your internet connection.
This has been the situation for a long time, and users have depended on safety in numbers ... i.e. the difficulty of prosecution. However there are moves in the UK whereby legislation may make prosecution easier for rights holders, so this is one to keep a watch on.
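For illustration only, this is roughly the sort of record an enforcement client could harvest simply by joining a swarm for a torrent it monitors (the peer address and infohash here are made-up placeholders):

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SwarmSighting:
    infohash: str      # identifies which torrent (i.e. which film / album)
    peer_ip: str       # every peer in the swarm can see this
    peer_port: int
    seen_at: datetime  # the timestamp needed to ask the ISP who held the address

def record_sighting(infohash, peer_ip, peer_port):
    return SwarmSighting(infohash, peer_ip, peer_port, datetime.now(timezone.utc))

evidence = record_sighting("ab12ef90", "203.0.113.42", 51413)
print(evidence)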
The Future
Ultimately what will happen in this 'arms race' is that users will simply move over to a more secure protocol / system. Already quite decent solutions are available for truly anonymous peer to peer traffic, through networks such as I2P and TOR. However, the reason the current anonymizing solutions have not become mainstream is that there is a cost to the anonymizing: it lowers the efficiency of file transfers, because the packets (as I understand it) have to travel through one or more intermediate computers in order to 'hide' the source and destination IP addresses from the two end points.
There are also other side effects. In order for those systems to work, and maintain plausible deniability, your PC must route traffic that has been requested by other PCs in the anonymous network. While this could be something perfectly innocent, it could also be something pretty heinous, and there have even been cases of people being charged for routing packets through TOR without knowing what they contained. However, one should realise that this is the very nature of the internet. Routers and cables constantly carry information whose content they have no knowledge of. Why should routing packets unintelligently through a PC be any different?
Tuesday, 8 January 2008
Passwords
I was going to write a post full of swearing and expletives, but thought better of it.
I have just spent the past hour trying to find out how to log back into this blogger account. The problem is, every website on the web wants you to have your own username and password to use it. Now that would be fine if there were just one or two websites. However once you find yourself using, say 10-20 websites, you have to start reusing usernames and passwords to have any chance of remembering your login.
So I (as I suspect many) have a rotation of 3 email addresses I use for registering at websites that I don't trust, such as google, where they can send me as much spam as they want (because I don't read those emails). I also use a rotation of 3 passwords corresponding to these in order to log into websites. Maybe someone will hack in, but frankly, I don't really care. These are throwaway emails, I'm not stupid enough to use my main emails.
Now this is great, but google want to have a 'google version' of my email and password to log into blogger. But the thing insists that I can't use any of the passwords that I already use that are easy for me to remember.
Oh no, that would be far too simple.
Instead I have to come up with some convoluted password, just in case osama bin laden himself tries to login to my account and use it to plan attacks on the free world.
And of course the upshot of this is, I naturally forget said password (and login details).
Cue spending 1 hour searching through my password books (I write them down so they are easier for thieves to steal) but I can't find it. I eventually by trial and error track down which email address (out of 7 I have) to use, then use blogger to reset the password.
Really, there has got to be a simpler solution to all these password protected sites. For most people it seems about 100x more likely that they will lose their own password than that some 'hacker' will target them and try to log in as them.
Really google, what I'd like is this: if we want a short, easy to guess password, let us use it for gawd's sake, instead of insisting on fort knox security for the equivalent of our fridge. Why don't you add retina scanning and biometric face measurements while you are at it, which screw up every time I get a haircut.
Thank you, rant over.