Effect of factors on tariff
Having studied the various factors affecting tariff setting for power generation companies, we will now look at how each of these factors influences the actual rates of electricity, or tariffs.
Economies of scale come into play in every segment of business, and this is true for electric power generation as well. We studied the diversity factor and the demand factor in our previous articles. Since the power supply company has to recover the costs of production and distribution from the people to whom it supplies electricity, it can offer competitive rates to consumers if it can use the same generating capacity to supply a large number of people.
As we already discussed, the demand factor gives the proportion of power actually demanded compared with the total power that could be demanded, so from the company's point of view it is better if this factor is as low as possible. Similarly, the diversity factor should be as far above unity as possible. This in effect means that the company can serve a large number of customers, knowing that they will not all demand power at the same time. It can therefore keep its rates lower by spreading its costs over a large customer base.
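The two ratios described above can be sketched in a few lines of code. This is a minimal illustration using hypothetical numbers (the kW figures below are invented for the example, not taken from any real system):

```python
def demand_factor(max_demand_kw, connected_load_kw):
    """Maximum demand as a fraction of total connected load.

    A lower value favours the supplier: less capacity is needed
    relative to everything that could, in principle, be switched on.
    """
    return max_demand_kw / connected_load_kw


def diversity_factor(individual_max_demands_kw, system_max_demand_kw):
    """Sum of the customers' individual peaks divided by the actual
    simultaneous peak on the system. A value well above 1 favours the
    supplier, since the individual peaks do not coincide.
    """
    return sum(individual_max_demands_kw) / system_max_demand_kw


# Four hypothetical consumers whose individual peaks never coincide
individual_peaks = [40, 30, 20, 10]   # kW, each customer's own peak
system_peak = 70                      # kW, observed simultaneous peak
connected_load = 150                  # kW, total connected (rated) load

print(demand_factor(system_peak, connected_load))        # ~0.47
print(diversity_factor(individual_peaks, system_peak))   # ~1.43
```

With a diversity factor of about 1.43, the company needs only 70 kW of capacity to serve customers whose peaks sum to 100 kW, which is exactly the cost-spreading effect described above.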
On the other hand, if the customer base is relatively small, the company has to recover all of its costs and profit from those few customers, which means the cost per unit of electricity would have to be quite high.
Another useful term related to tariff is the load factor, calculated by dividing the average power by the maximum demand over specified intervals of time. Note that the two time periods need not be the same: the average power could be taken over the whole year while the maximum demand is taken over one hour. The load factor itself is a dimensionless ratio, but the time bases used should always be stated, for example an annual load factor based on the hourly peak.
The term can be applied both to power generating equipment and to power consuming equipment. In the case of the former, the load factor is simply the number of units actually generated divided by the maximum that could have been generated at full capacity. For example, if a power plant has a capacity of 5 MW but generated and supplied an average of only 2.5 MW over the given period, then its load factor is 0.5, or 50%.
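The worked example above translates directly into code. A minimal sketch:

```python
def load_factor(average_power_mw, max_demand_mw):
    """Average power over a period divided by the peak demand
    (or installed capacity, for a generating plant) in that period.
    Returns a dimensionless ratio between 0 and 1.
    """
    return average_power_mw / max_demand_mw


# Example from the text: a 5 MW plant averaging only 2.5 MW
print(load_factor(2.5, 5.0))  # 0.5, i.e. 50%
```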
Is more load a good thing?
Ideally one might assume that the load factor should be low, meaning electric demand is low compared with installed capacity and less electricity is consumed. That might sound good in terms of the environment and energy conservation. Unfortunately, the reverse is true for the owner or the company generating the power.
The reason is that when the plant is set up, the capacity and the machines installed are sized according to the forecast output. A bigger demand forecast means bigger generators, alternators and transformers, which in turn increases the initial fixed costs as well as other related expenses for the company. These expenses can only be justified and economically recovered if the actual demand is equal to, or at least very near, the full load for which the plant was built.
Hence, strange though it may seem, the company generating power always wishes the load factor to be as close to unity as possible.
Be careful before you start wasting electricity
I hope that after reading this article you do not start using excess energy thinking that it is good for the company. In fact, the company is in a much better position if the load factor is increased not by increased demand from a single group, but by spreading its supply over a large customer base. That way energy conservation is taken care of, and it is economical for the generating company as well.
So in the end I would only like to reiterate that the basic principles of energy conservation and environmental concern still hold true, so please take care to conserve as much electricity as possible. You won't be helping either the power plant company or the environment if you waste electricity.