In the olden days, i.e. last week, software companies could reasonably use hardware CPU capacity as a rough-and-ready proxy for the relative value of software. Basically, they could sell a two-CPU license for about twice the cost of a one-CPU license because the CPU count roughly modelled licensee value.
Today that compromise between ease of accounting and the quality of the value approximation just doesn't cut it. Dual-core Windows XP desktops, for example, run most applications no more than 10% faster than single-core machines, and even recent Linux kernels on Xeon lose scaling linearity very rapidly for most applications.
In other words, a license to run on eight processors shouldn't be worth anywhere near eight times a single-CPU license, and even widespread vendor acknowledgement that multi-core CPUs should be counted per package rather than per core won't remedy the fundamental unfairness of current per-processor pricing.
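To make the mismatch concrete, here's a toy calculation. The scaling figures are illustrative assumptions only, not measured benchmarks: it simply assumes each added CPU contributes some fraction of a full CPU's worth of throughput, then compares delivered value against per-CPU license cost.

```python
# Toy model of the per-CPU pricing mismatch. The 40% marginal
# scaling efficiency below is an invented, illustrative number.

def relative_throughput(cpus, marginal_efficiency):
    """Throughput relative to one CPU, assuming each added CPU
    contributes only a fraction of a full CPU's worth of work."""
    return 1 + (cpus - 1) * marginal_efficiency

for cpus in (1, 2, 4, 8):
    value = relative_throughput(cpus, 0.4)  # assumed 40% marginal scaling
    cost = cpus                             # per-CPU licensing scales linearly
    print(f"{cpus} CPUs: ~{value:.1f}x throughput, {cost}x license cost")
```

Under those assumptions an eight-CPU box delivers under 4x the throughput of a single CPU, yet a per-CPU license bills it at 8x.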
Solaris 10 on SPARC scales nearly linearly from one processor to, currently, 72 dual-core UltraSPARC IV CPUs. Similarly, both Solaris 10 and some BSD variants offer nearly linear scaling on Xeon right up to the limits of Intel's maximum four-way cache coherency.
In those environments, and within those processor limits, traditional licensing continues to be roughly as fair as it ever was.
The question, however, is what to do outside those environments. One option I'm growing enamoured of is for vendors to establish genuine throughput metrics and price accordingly. Both Oracle and SAP, for example, maintain benchmark suites and could easily develop a licensing cost scheme based on real relative performance; other vendors could then piggyback on those standards.
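As a sketch of what throughput-based pricing might look like: everything here is hypothetical, since the reference throughput, base price, and linear price formula are my assumptions, not anything Oracle or SAP publishes. The idea is just that price tracks measured benchmark throughput relative to a reference platform, rather than CPU count.

```python
# Hypothetical throughput-based license pricing. All figures are
# invented for illustration; no vendor's actual price list is implied.

REFERENCE_THROUGHPUT = 1000.0   # benchmark units for the baseline platform
BASE_PRICE = 40_000             # license price at the reference throughput

def throughput_price(measured_throughput):
    """Price proportional to delivered benchmark throughput."""
    return BASE_PRICE * measured_throughput / REFERENCE_THROUGHPUT

# An eight-CPU box that benchmarks at only ~3.8x the single-CPU
# baseline pays ~3.8x, not 8x, the single-CPU price.
print(throughput_price(3800.0))   # 152000.0
```

The attraction is that the customer pays for what the hardware actually delivers, so the scheme stays fair whether the platform scales linearly or not.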
To make it work, vendors would probably scale pricing only within broad target-market groups: setting the standard differently, for example, for the Windows and Solaris/SPARC markets to avoid underpricing one and overpricing the other.
That would be reasonably fair and easy enough to administer that a majority of the market could get behind it.