The core claim made by N-tier architecture proponents is that this structure breaks an application into modular elements that can be separately maintained and used for more than one application.
In the simplest possible version of this you have at least one database server, at least one application server, and at least one client running the GUI (or "presentation layer") that accesses the application, which in turn works on the data.
Notice that if your application follows the simple paradigm "get the data, do something with it, send the result to the user", you are theoretically implementing an N-tier architecture. In practice, however, the term is only applied when these things happen on different machines: a (physical) database server responds to a query from a (physical) application server, which processes the result before sending it on to another (physical) application server or to the client.
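That paradigm can be sketched in a few lines. This is only an illustration, not anything from the original: the function names, the query string, and the 5% "business rule" are all invented, and in a real N-tier deployment each function would run on its own machine with network calls between them.

```python
# Hypothetical sketch of the "get the data, do something with it,
# send the result to the user" paradigm as three logical tiers.

def data_tier(query):
    # Stand-in for the database server: answers queries from canned data.
    table = {"SELECT balance": 100}
    return table[query]

def application_tier(account):
    # Stand-in for the application server: fetches data, then
    # "does something with it" (an invented 5% interest rule).
    raw = data_tier("SELECT balance")
    return raw * 1.05

def presentation_tier():
    # Stand-in for the client GUI: formats the result for the user.
    return f"Balance: {application_tier('acct-1'):.2f}"

print(presentation_tier())  # → Balance: 105.00
```

Run on one machine, as here, this is just a layered program; the N-tier label only attaches once the calls between the layers cross machine boundaries.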
When IBM first worked through the client-server idea in the late 1960s they soon recognized that running all application logic on a client computer separate from the database machine failed progressively on serialization issues as the number of clients increased. In response IBM simply abandoned a bad idea, centralized processing on the host, and developed a line of cheap and reliable terminals for the desktop. Thus when the System 38 shipped in 1979 it came with System/R in microcode, host-based RPG, and the 5220 terminal instead of the original 5120 desktop computer with BASIC and APL.
During the 1980s, however, the pressure to find uses for the desktop PC led to a different serialization solution: stored procedures, which moved application logic into the database and thus became the basis for a "client-server" application industry that isn't actually client-server.
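A minimal sketch of why moving the logic into the database fixes the serialization problem. Everything here is invented for illustration: sqlite3 stands in for the database engine (SQLite has no stored procedures, so a single server-side UPDATE stands in for one), and the stock table and quantities are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('widget', 10)")

# Client-side logic: read the value, compute in the application, write back.
# This read-modify-write cycle is exactly where serialization breaks down
# as clients multiply: two clients can both read qty=10, both decrement,
# and both write back 9, losing an update.
qty = conn.execute("SELECT qty FROM stock WHERE item='widget'").fetchone()[0]
conn.execute("UPDATE stock SET qty=? WHERE item='widget'", (qty - 1,))

# Server-side logic (the stored-procedure idea): the decrement executes
# inside the database as one statement, so the engine serializes it
# correctly no matter how many clients call it.
conn.execute("UPDATE stock SET qty = qty - 1 WHERE item='widget' AND qty > 0")

print(conn.execute("SELECT qty FROM stock WHERE item='widget'").fetchone()[0])  # → 8
```

The second form is why stored procedures let hundreds of PCs share one database safely: the PCs keep the presentation role, but the contended logic runs where the lock manager lives.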
Originally Microsoft made only the client part of this, but over time the company gained both OS software (from DEC) and database software (from Sybase) to enable the all-Microsoft application stack. It soon became clear that the most cost-efficient way of actually delivering this while keeping the desktop PC is what most bigger organizations have now: the fully locked-down desktop PC combining terminal functionality with PC costs, risks, and frustrations.
Back in the mid 90s, when Windows data center centralization was just getting started, NT and Intel limitations forced developers to split larger databases across multiple machines while limiting the logic each could run - so placing separate application processors in the same rackmounts as the database processor became the obvious way to compensate for the lack of in-the-box SMP scalability.
Over time the migration of functionality from the client PC to the data center meant that client processes combining results from several of these application servers also required dedicated servers - and so three-tier became N-tier and a whole IT architecture was born.
What we have now, therefore, are presentation managers (PCs acting as dumb terminals) connecting to (optional) integration servers accessing application servers accessing database servers - but what it all amounts to is a Rube Goldberg implementation of the same basic three-tier (database, application, presentation) architecture built into the System/38 in 1970 and offered for sale as the System 38/5220 combination ten years later.