


Chapter 13 Limits and Derivatives (Concepts)

Welcome to a pivotal chapter that marks the beginning of Calculus, a revolutionary branch of mathematics that deals with rates of change and accumulation. This chapter lays the essential groundwork by introducing two fundamental concepts: Limits and Derivatives. While previous algebraic studies often focused on static quantities and relationships, calculus provides the tools to analyze dynamic situations where quantities are changing. Understanding limits is the crucial first step towards grasping the definition and significance of the derivative, which itself forms the foundation of differential calculus.

We begin by exploring the intuitive idea behind a limit. Informally, the limit of a function $f(x)$ as the input $x$ gets arbitrarily close to some value 'a' is the value that $f(x)$ gets arbitrarily close to. It's about the destination value the function is heading towards, not necessarily the value *at* $a$. We formalize this concept using the notation $\mathbf{\lim\limits_{x \to a} f(x) = L}$, read as "the limit of $f(x)$ as $x$ approaches $a$ equals $L$". To make this notion precise, we introduce the concepts of the Left-Hand Limit (LHL), where $x$ approaches $a$ from values strictly less than $a$ ($\lim\limits_{x \to a^-} f(x)$), and the Right-Hand Limit (RHL), where $x$ approaches $a$ from values strictly greater than $a$ ($\lim\limits_{x \to a^+} f(x)$). A crucial condition is that the limit $\lim\limits_{x \to a} f(x)$ exists if and only if both the LHL and RHL exist and are equal ($LHL = RHL = L$).

Calculating limits involves various techniques. For 'well-behaved' functions like polynomials at points within their domain, we can often use direct substitution (simply plugging $x=a$ into $f(x)$). However, many situations lead to indeterminate forms like $\frac{0}{0}$ or $\frac{\infty}{\infty}$ upon direct substitution. In such cases, algebraic manipulation is required, commonly involving techniques like factorization (to cancel common factors) or rationalization (often used when square roots are present). We will also encounter and utilize some important standard limits, including $\lim\limits_{x \to a} \frac{x^n - a^n}{x - a} = na^{n-1}$, $\lim\limits_{x \to 0} \frac{\sin x}{x} = 1$, and $\lim\limits_{x \to 0} \frac{1 - \cos x}{x} = 0$.

Building upon the concept of limits, we introduce the Derivative. The derivative has two powerful interpretations: geometrically, it is the slope of the tangent line to the curve $y = f(x)$ at a point, and physically, it is the instantaneous rate of change of one quantity with respect to another (such as velocity).

The derivative of a function $f(x)$ at a point $x=a$, denoted by $f'(a)$ or $\frac{df}{dx}|_{x=a}$, is formally defined using the concept of limits: $$ \mathbf{f'(a) = \lim\limits_{h \to 0} \frac{f(a+h) - f(a)}{h}} $$ provided this limit exists. Calculating the derivative using this limit definition is known as finding the derivative from first principles. We will practice this fundamental method to find the derivatives of basic functions like constants, $x^n$, $\sin x$, $\cos x$, and $\tan x$.

While deriving from first principles is conceptually crucial, it can be lengthy. Therefore, we establish the Algebra of Derivatives – a set of rules that allow us to find derivatives of more complex functions built from simpler ones much more efficiently: the sum and difference rules, the scalar multiple rule, the product rule, and the quotient rule.

Using these rules, particularly the product and quotient rules, combined with the derivatives of basic functions found from first principles, enables us to differentiate polynomials, rational functions, and other combinations systematically. This chapter lays the essential groundwork for all of differential calculus.



Limits

Introduction to Calculus

The invention of Calculus stands as one of the most transformative milestones in the history of mathematics. It marked a shift from static mathematics (dealing with fixed objects) to dynamic mathematics (dealing with objects in motion and changing quantities).

The Fundamental Nature of Calculus

Calculus is the branch of mathematics that primarily deals with the study of change. It focuses on how the value of a function $f(x)$ responds to infinitesimal variations in its domain variable $x$. Essentially, it provides the tools to quantify and analyze the rate of change and the accumulation of quantities.

If we consider a function $y = f(x)$, calculus helps us understand exactly how $y$ behaves as $x$ moves closer and closer to a specific point, even if the function cannot be evaluated at that exact point.

Applications in Real-World Scenarios

Calculus is not merely an abstract theory; it is the language of modern science. Its applications are vast and varied: physics uses derivatives to describe velocity and acceleration, economics to analyse marginal cost and revenue, biology to model growth rates, and engineering to optimise designs.

Scope of the Study

In this comprehensive study of Calculus, we divide our journey into two major segments: Limits and Derivatives.

1. Limits: We shall introduce the concept of the limit of a real-valued function. This includes the study of Algebra of Limits and the evaluation of limits for various function types: polynomial, rational, trigonometric, exponential, and logarithmic functions.

2. Derivatives: We shall define the derivative of a real function and explore its geometrical interpretation (as the slope of a line) and its physical interpretation (as the instantaneous rate of change, like velocity). We will also derive the formulas for the derivatives of fundamental algebraic and trigonometric functions.


The Concept of Neighbourhood

In calculus, the concept of a neighbourhood is fundamental to defining limits, continuity, and differentiability. Intuitively, the neighbourhood of a point consists of all the points that are "close" to it. Since the set of real numbers is dense, any interval of non-zero length, no matter how small, contains infinitely many real numbers.

If we select a specific real number $c$, the real numbers situated immediately to its left and right on the number line are referred to as its neighbours.

1. Symmetric Neighbourhood

Let $c$ be a real number and $\delta$ (delta) be a very small positive real number ($\delta > 0$). The set of all real numbers lying between $c - \delta$ and $c + \delta$ is called a Symmetric Neighbourhood of $c$.

Mathematical Representation:

The symmetric neighbourhood is denoted by the open interval $(c - \delta, c + \delta)$. Any point $x$ belonging to this neighbourhood satisfies the inequality:

$c - \delta < x < c + \delta$

By subtracting $c$ from all sides, we get:

$-\delta < x - c < \delta$

This can be written using the modulus notation as:

$|x - c| < \delta$

[Distance between $x$ and $c$ is less than $\delta$]

2. Deleted Neighbourhood

In the context of limits, we often need to examine the behaviour of a function $f(x)$ as $x$ approaches $c$, but we do not care about the value of the function at $x = c$. To represent this, we use a Deleted Neighbourhood, which is simply a symmetric neighbourhood with the point $c$ removed.

Mathematical Representation:

It is the set of all $x$ such that $x \in (c - \delta, c + \delta)$ and $x \neq c$. In modulus form, since the distance $|x - c|$ must be greater than zero for $x \neq c$, it is expressed as:

$0 < |x - c| < \delta$

Geometrically, this is represented as the union of two open intervals:

$(c - \delta, c) \cup (c, c + \delta)$

3. One-Sided Neighbourhoods

Sometimes we only approach a point from one direction (either from the left/smaller values or from the right/larger values). This leads to Left and Right neighbourhoods.

(a) Left Neighbourhood

A Left $\delta$-neighbourhood of $c$ includes $c$ and points to its left. It is given by the interval $(c - \delta, c]$. If we exclude $c$, it becomes a Deleted Left $\delta$-neighbourhood, denoted as $(c - \delta, c)$.

(b) Right Neighbourhood

A Right $\delta$-neighbourhood of $c$ includes $c$ and points to its right. It is given by the interval $[c, c + \delta)$. If we exclude $c$, it becomes a Deleted Right $\delta$-neighbourhood, denoted as $(c, c + \delta)$.


Number line showing point c with delta distances on both sides

Summary Table of Intervals

Type of Neighbourhood | Interval Notation | Algebraic Condition
Symmetric | $(c - \delta, c + \delta)$ | $|x - c| < \delta$
Deleted Symmetric | $(c - \delta, c) \cup (c, c + \delta)$ | $0 < |x - c| < \delta$
Left (Inclusive) | $(c - \delta, c]$ | $c - \delta < x \leq c$
Right (Inclusive) | $[c, c + \delta)$ | $c \leq x < c + \delta$
Left (Deleted) | $(c - \delta, c)$ | $c - \delta < x < c$
Right (Deleted) | $(c, c + \delta)$ | $c < x < c + \delta$
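The algebraic conditions in the table translate directly into code. The sketch below (the helper names `in_symmetric_nbhd` and `in_deleted_nbhd` are our own, purely illustrative) checks whether a point belongs to each kind of neighbourhood.

```python
def in_symmetric_nbhd(x, c, delta):
    # |x - c| < delta  <=>  x lies in the open interval (c - delta, c + delta)
    return abs(x - c) < delta

def in_deleted_nbhd(x, c, delta):
    # 0 < |x - c| < delta  excludes the centre point c itself
    return 0 < abs(x - c) < delta

print(in_symmetric_nbhd(2.05, 2, 0.1))  # True: 2.05 lies in (1.9, 2.1)
print(in_deleted_nbhd(2, 2, 0.1))       # False: the centre is removed
print(in_deleted_nbhd(2.05, 2, 0.1))    # True: close to 2 but not equal to it
```

The strict inequality `0 < abs(x - c)` is exactly what makes the neighbourhood "deleted": it rejects $x = c$ while accepting every other point within distance $\delta$.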

Definition of Limit

The concept of a Limit is the fundamental building block of Calculus. It allows us to study the behavior of a function near a point, even if the function is not defined at that specific point. In mathematics, we are often interested in knowing what happens to a function $f(x)$ as the independent variable $x$ gets closer and closer to a particular value $a$.

Formal Definition

Let $f(x)$ be a function defined in a deleted neighbourhood of $a$. If $f(x)$ approaches a fixed real number $l$ as the variable $x$ approaches a constant $a$ (through values both smaller and larger than $a$), then $l$ is called the limit of the function $f(x)$ as $x \to a$.

In symbolic form, we represent this as follows:

$\lim\limits_{x \to a} f(x) = l$

Visual Representation of a Limit

The following graph illustrates the geometric meaning of a limit. As the value of $x$ moves along the x-axis toward the point $a$ from both the left-hand side and the right-hand side, the corresponding points on the curve $f(x)$ move toward the height $l$ on the y-axis.

Graphical representation of limit where x approaches a and f(x) approaches l from both sides

Key observations from the visual representation: as $x$ approaches $a$ from either the left or the right along the x-axis, the corresponding points on the curve approach the same height $l$ on the y-axis; whether the function is actually defined at $x = a$, and what value it takes there, does not affect this behaviour.

Detailed Interpretation

The notation $\lim\limits_{x \to a} f(x) = l$ carries several significant meanings:

1. $x$ approaches $a$ ($x \to a$): This means $x$ takes values very close to $a$ but never actually equals $a$. The limit is concerned with the "tendency" of the function near $a$, not the value at $a$.

2. $f(x)$ approaches $l$ ($f(x) \to l$): This implies that the difference between $f(x)$ and $l$ can be made as small as we want by choosing $x$ sufficiently close to $a$.

Important Observations

1. Existence: For the limit in the above equation to exist, the function must approach the same value $l$ regardless of whether $x$ approaches $a$ from the left side or the right side.

2. Finite Value: In the context of standard real-valued limits, $l$ must be a finite real number. If the function grows without bound, we say the limit is infinity ($\infty$), which implies the limit does not exist in the finite sense.

3. Independence: The limit value is independent of $f(a)$. The function $f(x)$ might be defined at $a$, it might be undefined, or it might be defined as a different value entirely; none of these affect the limit as $x \to a$.


Concept of Left Hand and Right Hand Limits

To understand the limit of a function comprehensively, we must analyze the function from two directions: the left side (values smaller than $a$) and the right side (values larger than $a$).

1. Left Hand Limit (LHL)

A real number $l_1$ is called the Left Hand Limit of a function $f(x)$ at $x = a$ if the values of $f(x)$ can be made as close as we desire to $l_1$ by taking points $x$ sufficiently close to $a$ from the left side (i.e., $x < a$).

Symbolically, it is written as:

$\lim\limits_{x \to a^-} f(x) = l_1$

[Approaching from $x < a$]

In other words, $l_1$ is the expected value of $f$ at $x = a$ based on the values of $f(x)$ for $x$ near $a$, to the left of $a$.

2. Right Hand Limit (RHL)

A real number $l_2$ is called the Right Hand Limit of a function $f(x)$ at $x = a$ if the values of $f(x)$ can be made as close as we desire to $l_2$ by taking points $x$ sufficiently close to $a$ from the right side (i.e., $x > a$).

Symbolically, it is written as:

$\lim\limits_{x \to a^+} f(x) = l_2$

[Approaching from $x > a$]

Essentially, $l_2$ is the expected value of $f$ at $x = a$ based on the values of $f(x)$ for $x$ near $a$, to the right of $a$.

Graph showing a function approaching a point from the left (minus) and right (plus) sides.

Existence of Limit

The limit of a function at a point $x = a$ exists only if the function approaches the same finite value from both the left and the right. If the Left Hand Limit and Right Hand Limit coincide, their common value is the limit of $f(x)$ at $x = a$.

Condition for Existence:

$\lim\limits_{x \to a^-} f(x) = \lim\limits_{x \to a^+} f(x) = l$

[Necessary and Sufficient Condition]

If $\lim\limits_{x \to a^-} f(x) \neq \lim\limits_{x \to a^+} f(x)$, we say that the limit does not exist at $x = a$.

Flowchart: Start -> Calculate LHL -> Calculate RHL -> Are they equal? -> Yes (Limit Exists) / No (Limit DNE)

Method to Evaluate LHL and RHL

The following substitution method is used to evaluate one-sided limits by introducing a small positive increment $h$.

Step-by-Step Procedure:

Step I: Identify the limit required. For LHL, we focus on $\lim\limits_{x \to a^-} f(x)$. For RHL, we focus on $\lim\limits_{x \to a^+} f(x)$.

Step II: Use the substitution method to transform the variable:

Step III: Rewrite the limit in terms of $h$ and simplify the resulting expression:

$\text{LHL} = \lim\limits_{h \to 0} f(a - h)$

$\text{RHL} = \lim\limits_{h \to 0} f(a + h)$

Step IV: Evaluate the resulting limit as $h$ approaches $0$. If both the equations above yield the same finite result, the limit exists.
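Steps I-IV can be sketched numerically: evaluate $f(a - h)$ and $f(a + h)$ for smaller and smaller $h > 0$ and watch where the values head. The helper name `one_sided_limits` below is our own, not a standard function.

```python
def one_sided_limits(f, a, hs=(0.1, 0.01, 0.001, 0.0001)):
    # x = a - h (h > 0) probes the LHL; x = a + h probes the RHL
    lhl_values = [f(a - h) for h in hs]
    rhl_values = [f(a + h) for h in hs]
    return lhl_values, rhl_values

f = lambda x: 2 * x + 3
lhl, rhl = one_sided_limits(f, 2)
print(lhl)  # values climbing towards 7 from below
print(rhl)  # values falling towards 7 from above
```

Both sequences settle at the same value, 7, so by Step IV the limit of $2x + 3$ at $x = 2$ exists. A numeric probe like this only suggests the answer; the algebraic substitution in Steps II-III is what proves it.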

Summary Table

Limit Type | Notation | Substitution
Left Hand Limit | $\lim\limits_{x \to a^-} f(x)$ | $x = a - h, \quad h > 0$
Right Hand Limit | $\lim\limits_{x \to a^+} f(x)$ | $x = a + h, \quad h > 0$
Existence Condition | $\text{LHL} = \text{RHL}$ |
Concept map linking x approaching a to the substitution of h approaching 0 for both LHL and RHL.

Concept of Limit

The concept of a limit is the cornerstone of calculus. Loosely speaking, we say a function $f(x)$ has a limit $l$ as $x$ approaches $c$ if the value of $f(x)$ can be made arbitrarily close to $l$ by taking $x$ sufficiently close to $c$, but not necessarily equal to $c$.

To grasp this intuitive idea, let us examine various scenarios through numerical tables and graphical analysis.


(i) Case of a Continuous Linear Function: $f(x) = 2x + 3$

To deepen our understanding of how a function behaves in the immediate vicinity of a point, let us examine a linear function. Unlike complex functions with breaks or holes, linear functions provide a clear, continuous path that helps visualize the concept of "tending towards" a value.

Consider the function $f(x) = 2x + 3$. The domain of this function is $\mathbb{R}$. We want to investigate the behavior of $f(x)$ as the variable $x$ approaches the value $2$. Even though we can calculate $f(2)$ directly, the study of limits focuses on the values of the function around $x = 2$.

Numerical Observation from the Left (Left Hand Side)

As $x$ approaches $2$ from the left side (values smaller than $2$, such as $1.9, 1.99, 1.999$), we observe the corresponding values of $f(x)$:

$x$ (Approaching $2^-$) | 1.9 | 1.99 | 1.999 | 1.9999
$f(x) = 2x + 3$ | 6.8 | 6.98 | 6.998 | 6.9998

From the table above, it is evident that as $x$ gets sufficiently near to $2$ from the left, the function value $f(x)$ gets arbitrarily close to $7$. This is denoted as:

$\lim\limits_{x \to 2^-} (2x + 3) = 7$

[Left Hand Limit]

Numerical Observation from the Right (Right Hand Side)

Now, let us observe the behavior as $x$ approaches $2$ from the right side (values slightly greater than $2$, such as $2.1, 2.01, 2.001$):

$x$ (Approaching $2^+$) | 2.1 | 2.01 | 2.001 | 2.0001
$f(x) = 2x + 3$ | 7.2 | 7.02 | 7.002 | 7.0002

Similarly, as $x$ approaches $2$ from the right, $f(x)$ decreases and settles towards the value $7$. This is denoted as:

$\lim\limits_{x \to 2^+} (2x + 3) = 7$

[Right Hand Limit]

Conclusion and Graphical Interpretation

Since the values of the function approach the same real number (7) from both the left and the right, we say that the limit of $f(x)$ at $x = 2$ exists and is equal to $7$.

$\lim\limits_{x \to 2} (2x + 3) = 7$

In the graph of a linear function, this corresponds to the point $(2, 7)$ on the straight line. As your finger moves along the line towards $x = 2$ from either direction, the height (the y-value) of your finger approaches $7$.

Graph of 2x + 3 showing the limit as x approaches 2

Note that for a continuous function like this linear equation, the limit value is exactly the same as the function value, i.e., $f(2) = 2(2) + 3 = 7$. However, as we will see in further examples, this is not always the case for all functions.


(ii) Investigation of quadratic function $f(x) = x^2$

To understand the behavior of non-linear functions, let us perform an elaborative study of the most fundamental quadratic function, $f(x) = x^2$. This function represents a parabola that is symmetric about the y-axis.

Our objective is to investigate the function values of $f(x)$ as the variable $x$ approaches the origin ($x \to 0$). Unlike linear functions, the rate at which $f(x)$ changes here is not constant, which makes the study of its limit particularly interesting.

Numerical Investigation of $f(x) = x^2$

The domain of $f(x) = x^2$ is the set of all real numbers $\mathbb{R}$. We shall observe the "tendency" of the function by approaching $0$ from two distinct directions: the Left Hand Side (negative values) and the Right Hand Side (positive values).

1. Left Hand Limit (LHL) - Approaching from $x < 0$

As $x$ takes values closer to $0$ from the negative side (e.g., $-0.1, -0.01$), the value of $x^2$ remains positive because the square of a negative number is always positive. We observe this behavior in the table below:

$x$ (Approaching $0^-$) | $-0.1$ | $-0.01$ | $-0.001$ | $-0.0001$
$f(x) = x^2$ | $0.01$ | $0.0001$ | $0.000001$ | $0.00000001$

As $x$ gets arbitrarily close to $0$, the values of $f(x)$ are heading towards $0$. Mathematically, we state:

$\lim\limits_{x \to 0^-} x^2 = 0$

[Approaching from Left]

2. Right Hand Limit (RHL) - Approaching from $x > 0$

Now, let us consider $x$ approaching $0$ from the positive side. The values decrease towards $0$, and their squares also decrease as shown below:

$x$ (Approaching $0^+$) | $0.1$ | $0.01$ | $0.001$ | $0.0001$
$f(x) = x^2$ | $0.01$ | $0.0001$ | $0.000001$ | $0.00000001$

From the right side, the function values also settle at $0$. Mathematically, we state:

$\lim\limits_{x \to 0^+} x^2 = 0$

[Approaching from Right]

Geometrical Interpretation

The graph of $f(x) = x^2$ is a standard upward-opening parabola. When we move along the curve from the far left toward the y-axis, the "height" of the curve drops. Similarly, moving from the far right toward the y-axis, the height also drops.

Both paths lead to the Vertex of the parabola situated at $(0, 0)$. The fact that both "one-sided paths" meet at the same height ($y=0$) confirms that the limit exists at that point.

Graph of the parabola f(x) = x^2 showing limit approaching origin

Existence and Conclusion

For any function $f(x)$, the limit at a point exists if and only if the Left Hand Limit and Right Hand Limit are equal. In our elaborative study of $f(x) = x^2$ at $x = 0$:

$\text{LHL} = \text{RHL} = 0$

Therefore, we conclude that the limit exists and is given by:

$\lim\limits_{x \to 0} x^2 = 0$

Polynomial functions like $x^2$ are continuous everywhere. This means for any real number $a$, the limit is simply the value obtained by direct substitution. For $x \to 0$, $0^2 = 0$. This property is extensively used to solve complex limits by reducing them into polynomial forms.
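The symmetry noted in the tables above is easy to confirm in code: for each small $h > 0$, the left-side value $(-h)^2$ and the right-side value $h^2$ coincide, and both shrink towards $0$.

```python
# Probe f(x) = x**2 on both sides of 0 with shrinking step sizes.
for h in (0.1, 0.01, 0.001):
    left, right = (-h) ** 2, h ** 2
    print(left, right)  # equal on both sides, and heading to 0
```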


(iii) Investigation of $f(x) = \frac{x^2 - 4}{x - 2}$

In our study of limits, one of the most critical scenarios occurs when direct substitution into a function results in an undefined expression, such as $\frac{0}{0}$. These are known as Indeterminate Forms. In such cases, the limit of the function often exists even if the function itself is undefined at that specific point.

Consider the rational function $f(x) = \frac{x^2 - 4}{x - 2}$. The first step in any limit problem is to check the Domain of the function. Here, the denominator becomes zero when $x = 2$. Therefore, the domain of $f$ is $D_f = \mathbb{R} \setminus \{2\}$.

The Problem of Direct Substitution

If we attempt to find the value of the function at $x = 2$ by direct substitution, we encounter a mathematical difficulty:

$f(2) = \frac{2^2 - 4}{2 - 2} = \frac{0}{0}$

[Indeterminate Form]

In mathematics, the form $\frac{0}{0}$ does not have a defined value. However, the limit of the function as $x$ approaches $2$ (but is not equal to 2) provides us with the expected value of the function near that point.

Algebraic Derivation and Simplification

To evaluate the limit, we look for common factors in the numerator and denominator that cause the indeterminate form. Since the function is being evaluated for $x$ in the deleted neighbourhood of $2$, we know that $x \neq 2$, which means the term $(x - 2) \neq 0$.

Using the algebraic identity $a^2 - b^2 = (a - b)(a + b)$, we factorise the numerator:

$x^2 - 4 = (x - 2)(x + 2)$

Substituting this into our function:

$f(x) = \frac{\cancel{(x - 2)}^{1}(x + 2)}{\cancel{(x - 2)}_{1}}$

[Cancelling common factor as $x \neq 2$]

Thus, for all values of $x$ except $2$, the function simplifies to:

$f(x) = x + 2$

Numerical Verification

To confirm our algebraic result, let us observe the function values as $x$ approaches $2$ from both sides using a horizontal data table.

$x$ | 1.9 | 1.99 | 1.999 | 2.001 | 2.01 | 2.1
$f(x)$ | 3.9 | 3.99 | 3.999 | 4.001 | 4.01 | 4.1

As $x \to 2^-$ (from the left), $f(x) \to 4$. Similarly, as $x \to 2^+$ (from the right), $f(x) \to 4$.

Graphical Representation: The "Point-Circle" Discontinuity

Geometrically, the graph of $f(x) = \frac{x^2 - 4}{x - 2}$ looks exactly like the straight line $y = x + 2$, but with a significant difference: there is a "hole" or a removable discontinuity at the point $(2, 4)$.

Graph of y=x+2 with a hollow circle at x=2

The hollow circle at $x = 2$ indicates that the function is not defined there. However, the two paths (from the left and right) both lead directly to the coordinates $(2, 4)$. Hence, the limit exists.

Conclusion

Based on the algebraic simplification and numerical trend, we conclude:

$\lim\limits_{x \to 2} \frac{x^2 - 4}{x - 2} = 4$

This example teaches us that the limit is independent of the value of the function at that point.
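A short numeric sketch of this example: evaluating the function exactly at $x = 2$ fails (Python raises a `ZeroDivisionError` for the $\frac{0}{0}$ form), yet values just beside $2$ settle at $4$.

```python
def f(x):
    # Undefined at x = 2, where both numerator and denominator vanish
    return (x**2 - 4) / (x - 2)

try:
    f(2)
except ZeroDivisionError:
    print("f(2) is undefined (0/0 form)")

for x in (1.999, 2.001):
    print(x, f(x))  # both sides give values near 4
```

This mirrors the algebraic result: away from $x = 2$ the function behaves exactly like $x + 2$, so the probed values track $x + 2$ towards $4$.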


(iv) Investigation of a Piecewise Defined Function

A common misconception in calculus is that the limit of a function at a point must always equal the value of the function at that point. However, these are two entirely independent concepts. The Limit describes the behavior of a function in the deleted neighbourhood of a point (how it behaves near the point), while the Function Value describes the actual output at that specific point.

Let us consider a function $f(x)$ defined in a piecewise manner:

$f(x) = \begin{cases} \frac{x^2 - 1}{x + 1} & , & x \neq -1 \\ 2 & , & x = -1 \end{cases}$

Here, the function is explicitly defined at every real number. Our goal is to compare the limit of the function as $x \to -1$ with the actual value $f(-1)$.

Step 1: Evaluating the Limit ($x \to -1$)

To find the limit, we examine the function for values of $x$ close to $-1$, but not equal to $-1$. For all such values, we use the expression $\frac{x^2 - 1}{x + 1}$.

We can simplify this expression using the algebraic identity $a^2 - b^2 = (a - b)(a + b)$:

$f(x) = \frac{(x - 1)(x + 1)}{x + 1}$

[Factorizing the numerator]

$f(x) = x - 1$

[As $x \neq -1$, we cancel $(x + 1)$]

Now, as $x$ approaches $-1$, the value of $(x - 1)$ approaches $(-1 - 1) = -2$. Therefore:

$\lim\limits_{x \to -1} f(x) = -2$

Step 2: Identifying the Function Value

From the original definition of the function, the value at the point $x = -1$ is directly provided:

$f(-1) = 2$

(Given)

Step 3: Comparison and Conclusion

Comparing the limit obtained in Step 1 with the function value identified in Step 2, we observe:

$\lim\limits_{x \to -1} f(x) \neq f(-1)$

[$-2 \neq 2$]

This result leads to a crucial conclusion in Calculus: The limit of a function at a point can exist and be finite, yet be completely different from the actual value of the function at that point.

Numerical Representation

Let us look at the numerical data to see the "intended path" versus the "actual point":

$x$ | $-1.01$ | $-1.001$ | $-1$ (Exact) | $-0.999$ | $-0.99$
$f(x)$ | $-2.01$ | $-2.001$ | $2$ | $-1.999$ | $-1.99$

Note how the values of $f(x)$ are "aiming" for $-2$ from both sides, but the value exactly at $-1$ jumps to $2$.

Graphical Visualization

Geometrically, the graph consists of a straight line $y = x - 1$ with a hollow circle at the point $(-1, -2)$. This indicates that the trend of the line leads to $-2$. However, there is a solid isolated point plotted at $(-1, 2)$ on the graph.

Graph showing a line with a hole and a separate point representing the function value

This scenario is described as a Removable Discontinuity (Isolated Point Discontinuity). For a function to be continuous at a point $x = c$, three conditions must be met:

  1. $\lim\limits_{x \to c} f(x)$ must exist.
  2. $f(c)$ must be defined.
  3. $\lim\limits_{x \to c} f(x) = f(c)$.

In our example, condition 3 is violated, making the function discontinuous at $x = -1$.
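The piecewise definition above translates directly into code, making the mismatch between the limit and the function value concrete.

```python
def f(x):
    # (x**2 - 1)/(x + 1) away from -1; the isolated value 2 at x = -1
    if x != -1:
        return (x**2 - 1) / (x + 1)
    return 2

print(f(-1))                  # 2 : the defined value at the point
print(f(-1.001), f(-0.999))   # both near -2 : the value the limit targets
```

The probes on either side of $-1$ aim at $-2$, while the exact evaluation returns $2$, which is precisely the violation of condition 3.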

(v) Non-existence of a Limit (Analysis of Jump Discontinuity)

For a limit of a function to exist at a point $x = c$, it is mandatory that the function approaches a single, unique, and finite value from both directions. If the function shows a "split" behavior where it heads toward different values depending on the direction of approach, the limit is said to not exist.

Consider a piecewise defined function $f(x)$ given by:

$f(x) = \begin{cases} x - 2 & , & x < 0 \\ x + 2 & , & x > 0 \end{cases}$

In this scenario, we are interested in finding the limit as $x \to 0$.

Evaluation using the Substitution Method

To find the limit rigorously, we evaluate the Left Hand Limit (LHL) and the Right Hand Limit (RHL) separately using the $h \to 0$ method.

1. Evaluation of Left Hand Limit (LHL)

We approach $x = 0$ from values smaller than $0$. Let $x = 0 - h$, where $h > 0$ is an infinitesimally small number.

$\text{LHL} = \lim\limits_{x \to 0^-} f(x) = \lim\limits_{h \to 0} f(0 - h)$

Since $x < 0$, we use the definition $f(x) = x - 2$:

$\text{LHL} = \lim\limits_{h \to 0} [(-h) - 2]$

[Substituting $x = -h$]

$\text{LHL} = -2$

2. Evaluation of Right Hand Limit (RHL)

We approach $x = 0$ from values greater than $0$. Let $x = 0 + h$, where $h > 0$.

$\text{RHL} = \lim\limits_{x \to 0^+} f(x) = \lim\limits_{h \to 0} f(0 + h)$

Since $x > 0$, we use the definition $f(x) = x + 2$:

$\text{RHL} = \lim\limits_{h \to 0} [(h) + 2]$

[Substituting $x = h$]

$\text{RHL} = 2$

Numerical and Graphical Evidence

Let us observe the function values near the "break point" $x = 0$ to visualize the gap.

$x$ | $-0.1$ | $-0.01$ | $0$ | $0.01$ | $0.1$
$f(x)$ | $-2.1$ | $-2.01$ | Undefined | $2.01$ | $2.1$

As seen in the table, there is a sudden jump in the values of $f(x)$ from $-2.01$ to $2.01$ as we cross zero. This is geometrically represented as a Jump Discontinuity.

Graph of piecewise function showing a gap or jump at x=0

Final Conclusion

By comparing the one-sided limits calculated above, we find:

$\text{LHL} \neq \text{RHL}$

[$-2 \neq 2$]

Since the condition for the existence of a limit is violated, we formally state that:

$\lim\limits_{x \to 0} f(x)$ does not exist.
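The jump can be seen by probing the two branches just beside $0$ (the function in the text is undefined exactly at $0$, so we only evaluate at $x = \pm h$).

```python
def f(x):
    # x < 0 branch: x - 2;  x > 0 branch: x + 2 (we never call f(0) itself)
    return x - 2 if x < 0 else x + 2

h = 1e-6
lhl = f(0 - h)  # x < 0 branch: (-h) - 2, heading to -2
rhl = f(0 + h)  # x > 0 branch: h + 2, heading to 2
print(lhl, rhl)  # the two one-sided values disagree, so the limit DNE
```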


Formal Theorem of Existence of Limits

The existence of a limit is the primary condition that must be verified before any mathematical evaluation. This theorem is regarded as the "Necessary and Sufficient Condition" for a limit to be defined at a specific point.

Statement of the Theorem

A real-valued function $f(x)$ is said to have a limit $l$ as $x$ approaches a constant $c$ if and only if both the Left Hand Limit (LHL) and the Right Hand Limit (RHL) exist independently, are finite, and are equal to the same value $l$.

Mathematically, the existence is defined by the following bi-conditional relationship:

$\lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^+} f(x) = l \iff \lim\limits_{x \to c} f(x) = l$

Detailed Components of Existence

1. Left Hand Limit (LHL)

The LHL represents the value the function approaches as $x$ gets closer to $c$ from the left side ($x < c$). In solving problems, we use the substitution $x = c - h$.

$\text{LHL} = \lim\limits_{h \to 0} f(c - h)$

[where $h > 0$]

2. Right Hand Limit (RHL)

The RHL represents the value the function approaches as $x$ gets closer to $c$ from the right side ($x > c$). We use the substitution $x = c + h$.

$\text{RHL} = \lim\limits_{h \to 0} f(c + h)$

[where $h > 0$]

Criteria for Non-Existence

A limit $\lim\limits_{x \to c} f(x)$ is said to not exist if any of the following conditions occur: the LHL and RHL exist but are unequal; at least one of the one-sided limits is infinite ($\pm \infty$); or the function oscillates near $c$ without approaching any unique value.

Summary of Existence

Scenario | Condition | Conclusion
Standard Existence | $\text{LHL} = \text{RHL} = \text{Finite Value}$ | Limit Exists
Jump Discontinuity | $\text{LHL} \neq \text{RHL}$ | Limit Does Not Exist
Infinite Limit | $\text{LHL}$ or $\text{RHL} = \pm \infty$ | Limit Does Not Exist (Finitely)
Oscillatory Behavior | No unique value approached | Limit Does Not Exist
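The first three scenarios in the table can be distinguished by a rough numeric probe. The classifier below is a heuristic sketch (the name `classify_limit` and the thresholds are our own choices); numeric probing can suggest but never rigorously prove existence.

```python
def classify_limit(f, c, h=1e-6, tol=1e-3, big=1e5):
    lhl, rhl = f(c - h), f(c + h)          # probe just left and right of c
    if abs(lhl) > big or abs(rhl) > big:
        return "infinite: limit does not exist (finitely)"
    if abs(lhl - rhl) < tol:
        return "limit exists (approximately %.3f)" % lhl
    return "jump: limit does not exist"

print(classify_limit(lambda x: 2 * x + 3, 2))                   # exists, near 7
print(classify_limit(lambda x: x - 2 if x < 0 else x + 2, 0))   # jump
print(classify_limit(lambda x: 1 / x, 0))                       # infinite
```

Oscillatory behaviour (e.g. $\sin\frac{1}{x}$ near $0$) cannot be detected from two sample points, which is one reason such probes remain heuristics.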


Standard Results on Limits

In calculus, the evaluation of limits is fundamental to understanding continuity and differentiability. To simplify the process of finding limits for complex functions, we utilize several standard results and algebraic properties. These results allow us to break down complicated expressions into simpler components whose limits are easier to compute.


Basic Limits of Elementary Functions

In the study of calculus, understanding how elementary functions behave as the independent variable approaches a specific value is crucial. These basic limits serve as the building blocks for more complex evaluations involving trigonometric, logarithmic, and exponential functions.

1. Limit of a Constant Function

A constant function is a function whose output value is the same for every input value. Since the value of the function does not depend on $x$, its value remains unchanged regardless of what value $x$ approaches.

Result: If $f(x) = \alpha$, where $\alpha$ is a fixed real number, then for any real number $c$:

$\lim\limits_{x \to c} \alpha = \alpha$

[$\alpha$ is independent of $x$]

For example, if we consider the constant function $f(x) = 5$, then $\lim\limits_{x \to 10} 5 = 5$.

2. Limit of a Power Function

For a power function of the form $f(x) = x^n$, where $n$ is a natural number, the function is continuous over the set of real numbers. This allows for the evaluation of the limit through direct substitution.

Theorem: For all $n \in \mathbb{N}$ and any real number $c$:

$\lim\limits_{x \to c} x^n = c^n$

Explanation: This follows from the product rule of limits. Since $\lim\limits_{x \to c} x = c$, then:

$\lim\limits_{x \to c} x^n = \lim\limits_{x \to c} (x \cdot x \cdot ... \cdot x)$

$= \lim\limits_{x \to c} x \cdot \lim\limits_{x \to c} x \cdot ... \cdot \lim\limits_{x \to c} x$

$= c \cdot c \cdot ... \cdot c = c^n$

3. Limit of a Polynomial Function

A polynomial function $f(x)$ is defined as an expression consisting of variables and coefficients, involving only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables.

Let $f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0$ be a polynomial. The limit of this function as $x \to c$ is:

$\lim\limits_{x \to c} f(x) = f(c)$

Derivation using Algebra of Limits:

By using the Sum Rule of limits, we can distribute the limit across each term of the polynomial:

$\lim\limits_{x \to c} (a_n x^n + a_{n-1} x^{n-1} + ... + a_0)$

$= \lim\limits_{x \to c} a_n x^n + \lim\limits_{x \to c} a_{n-1} x^{n-1} + ... + \lim\limits_{x \to c} a_0$

Applying the Scalar Multiple Rule ($\lim \alpha f(x) = \alpha \lim f(x)$):

$= a_n \lim\limits_{x \to c} x^n + a_{n-1} \lim\limits_{x \to c} x^{n-1} + ... + a_0$

Substituting the results from the Power Function limit:

$= a_n c^n + a_{n-1} c^{n-1} + ... + a_0 = f(c)$

4. Limit of Absolute Value Function

The absolute value function $|x|$ is defined as:

$|x| = \begin{cases} x & , & x \geq 0 \\ -x & , & x < 0 \end{cases}$

For any real number $c$, the limit of the absolute value function is always the absolute value of $c$.

$\lim\limits_{x \to c} |x| = |c|$

Case I: If $c > 0$

In a small neighbourhood of $c$, $x$ is positive, so $|x| = x$. Thus, $\lim\limits_{x \to c} x = c = |c|$.

Case II: If $c < 0$

In a small neighbourhood of $c$, $x$ is negative, so $|x| = -x$. Thus, $\lim\limits_{x \to c} (-x) = -c$. Since $c$ is negative, $-c$ is positive, which is equal to $|c|$.

Case III: If $c = 0$

LHL: $\lim\limits_{x \to 0^-} (-x) = 0$

RHL: $\lim\limits_{x \to 0^+} (x) = 0$

Since LHL = RHL, the limit exists and is equal to $0$, which is $|0|$.
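The one-sided limits of $|x|$ at $0$ can be checked numerically. The sketch below is purely illustrative; the helper names `lhl` and `rhl` and the step size `h` are our own choices, not standard functions:

```python
# Illustrative numerical check of the one-sided limits of |x| at x = 0.
def lhl(f, c, h=1e-7):
    """Approximate the left-hand limit by sampling just below c."""
    return f(c - h)

def rhl(f, c, h=1e-7):
    """Approximate the right-hand limit by sampling just above c."""
    return f(c + h)

left = lhl(abs, 0)    # |0 - h| = h, which tends to 0
right = rhl(abs, 0)   # |0 + h| = h, which tends to 0
```

Both samples shrink towards $0$, consistent with LHL = RHL = $|0| = 0$.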

Example 1. Evaluate the limit of the polynomial function $f(x) = 4x^2 + 3x - 5$ as $x$ approaches $2$.

Answer:

Given: $f(x) = 4x^2 + 3x - 5$ and $c = 2$.

Using the property $\lim\limits_{x \to c} f(x) = f(c)$ for polynomials:

$\lim\limits_{x \to 2} (4x^2 + 3x - 5) = 4(2)^2 + 3(2) - 5$

$ = 4(4) + 6 - 5$

$ = 16 + 6 - 5$

$ = 22 - 5 = 17$

The value of the limit is $17$.
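The polynomial property $\lim\limits_{x \to c} f(x) = f(c)$ can be verified numerically for this example. This is an illustrative sketch; the sampling offset `1e-8` is an arbitrary small number:

```python
def f(x):
    # The polynomial from Example 1
    return 4 * x**2 + 3 * x - 5

exact = f(2)            # direct substitution: 4(4) + 6 - 5 = 17
approx = f(2 + 1e-8)    # sampling near x = 2 gives nearly the same value
```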


Algebra of Limits

The Algebra of Limits provides a set of rules that allow us to calculate the limits of complex functions by breaking them down into simpler parts. These rules are valid provided that the individual limits of the functions involved exist independently.

Let $f$ and $g$ be two real-valued functions defined on a common domain. Suppose that the limits of these functions as $x$ approaches a real number $c$ exist and are given by:

$\lim\limits_{x \to c} f(x) = l$

... (i)

$\lim\limits_{x \to c} g(x) = m$

... (ii)

Standard Algebraic Properties

Based on the assumptions in equations (i) and (ii), the following properties hold true:

Operation | Mathematical Expression | Result
Scalar Multiple | $\lim\limits_{x \to c} [\alpha \cdot f(x)]$ | $\alpha \cdot l$
Sum Rule | $\lim\limits_{x \to c} [f(x) + g(x)]$ | $l + m$
Difference Rule | $\lim\limits_{x \to c} [f(x) - g(x)]$ | $l - m$
Product Rule | $\lim\limits_{x \to c} [f(x) \cdot g(x)]$ | $l \cdot m$
Quotient Rule | $\lim\limits_{x \to c} \frac{f(x)}{g(x)}$ | $\frac{l}{m}$ ($m \neq 0$)
Reciprocal Rule | $\lim\limits_{x \to c} \frac{1}{f(x)}$ | $\frac{1}{l}$ ($l \neq 0$)
Power Rule | $\lim\limits_{x \to c} [f(x)]^n$ | $l^n$ ($\forall \ n \in N$)
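These rules can be illustrated numerically with a concrete pair of functions. In the sketch below, $f(x) = x^2$ and $g(x) = x + 1$ at $c = 1$, so $l = 1$ and $m = 2$; the step size `h` is an arbitrary small number:

```python
h = 1e-8
c = 1.0
f = lambda x: x**2       # lim f = l = 1 at c = 1
g = lambda x: x + 1      # lim g = m = 2 at c = 1

sum_approx = f(c + h) + g(c + h)     # Sum Rule:      tends to l + m = 3
prod_approx = f(c + h) * g(c + h)    # Product Rule:  tends to l * m = 2
quot_approx = f(c + h) / g(c + h)    # Quotient Rule: tends to l / m = 0.5 (m != 0)
```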

Detailed Explanation of Rules

1. Limit of a Scalar Multiple

The limit of a constant times a function is the constant times the limit of the function. This means the constant $\alpha$ can be moved "outside" the limit operation.

2. Sum and Difference Rules

The limit of the sum (or difference) of two functions is equal to the sum (or difference) of their individual limits. This property allows us to evaluate limits term-by-term in expressions like $x^2 + 5x$.

3. Product and Quotient Rules

The limit of a product of functions is the product of their limits. Similarly, the limit of a quotient is the quotient of their limits, provided the limit of the denominator is not zero. If the denominator limit $m = 0$, the quotient rule cannot be applied directly: when $l \neq 0$, the quotient cannot have a finite limit, and when $l = 0$ as well, we get the indeterminate form $0/0$, which requires further analysis (such as factorization or rationalization).

The Converse Property of Algebra of Limits

The standard rules for the Algebra of Limits (Sum, Difference, Product, and Quotient rules) are conditional theorems. They state that if the individual limits $\lim\limits_{x \to c} f(x)$ and $\lim\limits_{x \to c} g(x)$ exist, then the limit of their combination also exists. However, the converse of these statements is not necessarily true.

Converse of the Sum and Difference Rule

The existence of $\lim\limits_{x \to c} [f(x) \pm g(x)]$ does not imply that $\lim\limits_{x \to c} f(x)$ and $\lim\limits_{x \to c} g(x)$ exist individually.

Case Study: Infinite Limits

Consider the functions $f(x)$ and $g(x)$ as $x$ approaches $0$:

$f(x) = \frac{1}{x^2}$

[Limit is $\infty$, hence doesn't exist]

$g(x) = -\frac{1}{x^2}$

[Limit is $-\infty$, hence doesn't exist]

Now, let us evaluate the limit of their sum:

$\lim\limits_{x \to 0} [f(x) + g(x)] = \lim\limits_{x \to 0} \left[ \frac{1}{x^2} - \frac{1}{x^2} \right]$

$\lim\limits_{x \to 0} [0] = 0$

... (i)

Even though the resulting limit in eq. (i) exists and equals $0$, the individual limits of $f(x)$ and $g(x)$ do not exist at $x = 0$, since both functions are unbounded near $0$.
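This case study can be reproduced numerically. The sketch below is illustrative only; the sample point `x = 1e-4` is an arbitrary value near $0$:

```python
f = lambda x: 1 / x**2     # blows up near 0: its limit does not exist
g = lambda x: -1 / x**2    # also blows up near 0

x = 1e-4
individual = f(x)          # huge value, reflecting the divergence of f alone
combined = f(x) + g(x)     # identically 0 for every x != 0, so the sum's limit is 0
```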

Converse of the Product Rule

The existence of $\lim\limits_{x \to c} [f(x) \cdot g(x)]$ does not imply that the individual limits exist.

Case Study: Signum Function

Let $f(x) = \text{sgn}(x)$ and $g(x) = \text{sgn}(x)$, where $\text{sgn}(x)$ is the signum function.

At $x = 0$:

LHL of $f(x)$: $\lim\limits_{x \to 0^-} \text{sgn}(x) = -1$

RHL of $f(x)$: $\lim\limits_{x \to 0^+} \text{sgn}(x) = 1$

Since LHL $\neq$ RHL, $\lim\limits_{x \to 0} f(x)$ does not exist. The same applies to $g(x)$.

However, let's examine their product $h(x) = f(x) \cdot g(x)$:

$h(x) = [\text{sgn}(x)]^2 = 1$

$\forall \ x \neq 0$

$\lim\limits_{x \to 0} [f(x) \cdot g(x)] = \lim\limits_{x \to 0} (1) = 1$

... (ii)

The product's limit exists (equal to $1$), while individual limits fail to exist.

Converse of the Quotient Rule

The existence of $\lim\limits_{x \to c} \frac{f(x)}{g(x)}$ does not guarantee individual existence.

Consider $f(x) = \frac{1}{x}$ and $g(x) = \frac{1}{x}$. Neither limit exists at $x \to 0$. However:

$\lim\limits_{x \to 0} \frac{f(x)}{g(x)} = \lim\limits_{x \to 0} \frac{1/x}{1/x} = \lim\limits_{x \to 0} (1) = 1$
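The signum case study above can be checked numerically. This is an illustrative sketch; the `sgn` helper is our own definition and `h` an arbitrary small number:

```python
def sgn(x):
    """Signum function: -1 for negative x, 0 at zero, 1 for positive x."""
    return (x > 0) - (x < 0)

h = 1e-9
left = sgn(-h)               # left-hand sample: -1
right = sgn(h)               # right-hand sample: 1, so lim sgn(x) does not exist at 0
product = sgn(h) * sgn(h)    # [sgn(x)]^2 = 1 for every x != 0, so the product's limit is 1
```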

Important Summary Table

The following table summarizes the relationship between individual limits and combined limits:

If individual limits exist... | Then combined limit...
$\lim f$ and $\lim g$ exist | Always exists
$\lim f$ exists, $\lim g$ DNE | Never exists (for sum)
$\lim f$ DNE, $\lim g$ DNE | May exist

If combined limit exists... | Then individual limits...
$\lim (f+g)$ exists | May or may not exist
$\lim (f \cdot g)$ exists | May or may not exist
$\lim (f/g)$ exists | May or may not exist

Example 1. Evaluate $\lim\limits_{x \to 1} \frac{x^2 + 3x + 2}{x + 1}$ using the algebra of limits.

Answer:

We check if the limit of the denominator is non-zero:

$\lim\limits_{x \to 1} (x + 1) = 1 + 1 = 2$

(By direct substitution)

Since the denominator limit is $2 \neq 0$, we can apply the Quotient Rule:

Limit $= \frac{\lim\limits_{x \to 1} (x^2 + 3x + 2)}{\lim\limits_{x \to 1} (x + 1)}$

Applying the Sum Rule to the numerator:

Limit $= \frac{1^2 + 3(1) + 2}{2}$

Limit $= \frac{1 + 3 + 2}{2} = \frac{6}{2}$

Limit $= 3$

Final Answer: The value of the limit is $3$.


Example 2. If $\lim\limits_{x \to 2} f(x) = 5$ and $\lim\limits_{x \to 2} g(x) = 10$, find $\lim\limits_{x \to 2} [3f(x) + g(x)]$.

Answer:

Using the Sum Rule and Scalar Multiple Rule:

$\lim\limits_{x \to 2} [3f(x) + g(x)] = 3 \lim\limits_{x \to 2} f(x) + \lim\limits_{x \to 2} g(x)$

Substituting the given values:

$= 3(5) + 10$

$= 15 + 10 = 25$

The resulting value is $25$.


The Sandwich Theorem (Squeeze Principle)

The Sandwich Theorem, also frequently referred to as the Squeeze Principle or the Pinching Theorem, is a powerful tool in calculus used to find the limit of a function that is difficult to evaluate directly. By "sandwiching" the complex function between two other functions whose limits are known and equal, we can determine the limit of the intermediate function.

Statement of the Theorem

Let $f, g,$ and $h$ be three real-valued functions defined on an interval $D$. Suppose that for all $x \in D$ (except possibly at $x = c$):

$f(x) \leq g(x) \leq h(x)$

If the limits of the outer functions $f(x)$ and $h(x)$ as $x$ approaches $c$ are equal to some value $l$:

$\lim\limits_{x \to c} f(x) = l = \lim\limits_{x \to c} h(x)$

Then, the limit of the "squeezed" function $g(x)$ must also be $l$:

$\lim\limits_{x \to c} g(x) = l$

Geometrical Interpretation

Imagine the graphs of $f(x)$ and $h(x)$. Since $f(x) \leq g(x) \leq h(x)$, the graph of $g(x)$ is always trapped between the graphs of $f(x)$ and $h(x)$. As $x$ approaches $c$, both $f(x)$ and $h(x)$ converge to the same point $l$. Consequently, the function $g(x)$ is forced or "squeezed" to approach the same point $l$.

[Figure: Graph showing $f(x)$ and $h(x)$ squeezing $g(x)$ at point $c$]

Product of Infinitesimal and Bounded Functions

In calculus, an infinitesimal refers to a function that approaches zero as its variable approaches a certain value. A bounded function is one whose values stay within a specific finite range. One of the most useful results in limit evaluation is the product of these two types of functions.

Statement of the Theorem

If $f$ and $g$ are two functions such that:

(i) $\lim\limits_{x \to c} f(x) = 0$ (Function $f$ is an infinitesimal at $x = c$)

(ii) $g(x)$ is bounded in a deleted neighbourhood of $c$

Then, the limit of their product is zero:

$\lim\limits_{x \to c} [f(x) \cdot g(x)] = 0$

Detailed Concept of Boundedness

A function $g(x)$ is said to be bounded on an interval if its range is finite. Mathematically, there exist real numbers $k_1$ and $k_2$ such that for all $x$ in the domain:

$k_1 \leq g(x) \leq k_2$

(Definition of Boundedness)

This can also be expressed using absolute values. There exists a positive constant $M$ such that:

$|g(x)| \leq M$

Derivation of the Result

We can prove this result using the Sandwich Theorem. Let $g(x)$ be a bounded function such that $|g(x)| \leq M$ for some $M > 0$.

Step 1: Consider the absolute value of the product:

$|f(x) \cdot g(x)| = |f(x)| \cdot |g(x)|$

Step 2: Substitute the bound of $g(x)$:

$|f(x) \cdot g(x)| \leq |f(x)| \cdot M$

[Since $|g(x)| \leq M$]

Step 3: Rewrite as a double inequality:

$-M|f(x)| \leq f(x)g(x) \leq M|f(x)|$

Step 4: Apply limits to the bounding functions as $x \to c$:

$\lim\limits_{x \to c} (-M|f(x)|) = -M \cdot 0 = 0$

[Given $\lim f(x) = 0$]

$\lim\limits_{x \to c} (M|f(x)|) = M \cdot 0 = 0$

Step 5: By Sandwich Theorem:

$\lim\limits_{x \to c} (f(x) \cdot g(x)) = 0$
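A classic instance of this theorem is $f(x) = x$ (infinitesimal at $0$) times $g(x) = \sin(1/x)$ (bounded by $1$, with no limit at $0$). The numerical sketch below is illustrative; the sample points are arbitrary powers of ten:

```python
import math

def product(x):
    # f(x) = x tends to 0; g(x) = sin(1/x) oscillates but stays within [-1, 1]
    return x * math.sin(1 / x)

# |x * sin(1/x)| <= |x|, so the samples shrink towards 0 as x -> 0
samples = [abs(product(10.0 ** -k)) for k in range(3, 8)]
```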


Change of Variable Method

The Change of Variable Method is a fundamental technique used to simplify the evaluation of limits, especially when a direct substitution results in an indeterminate form.

The Principle of Substitution

If we wish to evaluate a limit as $x \to c$, we define a new variable $h$ such that it represents the difference between $x$ and $c$.

General Substitution Steps:

Let $x = c + h$. As the value of $x$ gets closer and closer to $c$, the value of the difference $h$ gets closer to $0$.

As $x \to c, \ h \to 0$

[Since $h = x - c$]

Thus, the limit expression is transformed as follows:

$\lim\limits_{x \to c} f(x) = \lim\limits_{h \to 0} f(c + h)$

One-Sided Limits using $h$-Method

In many complex functions, particularly those involving modulus, greatest integer functions, or signum functions, the limit may behave differently from the left and the right. The change of variable method is the standard way to evaluate these.

(i) Right Hand Limit (RHL)

To find the RHL, we approach $c$ from values slightly greater than $c$. We substitute $x = c + h$, where $h$ is a very small positive quantity ($h > 0$).

$\text{RHL} = \lim\limits_{x \to c^+} f(x) = \lim\limits_{h \to 0} f(c + h)$

(ii) Left Hand Limit (LHL)

To find the LHL, we approach $c$ from values slightly smaller than $c$. We substitute $x = c - h$, where $h$ is a very small positive quantity ($h > 0$).

$\text{LHL} = \lim\limits_{x \to c^-} f(x) = \lim\limits_{h \to 0} f(c - h)$

Condition for Existence: The limit $\lim\limits_{x \to c} f(x)$ exists if and only if:

$\text{LHL} = \text{RHL}$
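The $h$-method translates directly into a numerical sketch. Below, the greatest integer function at $c = 1$ shows disagreeing one-sided limits; the helper names and the step size `h` are our own choices:

```python
import math

def lhl(f, c, h=1e-9):
    return f(c - h)   # substitute x = c - h, with h > 0 small

def rhl(f, c, h=1e-9):
    return f(c + h)   # substitute x = c + h, with h > 0 small

# Greatest integer function at c = 1: LHL != RHL, so the limit does not exist.
left = lhl(math.floor, 1)    # floor of a number just below 1 is 0
right = rhl(math.floor, 1)   # floor of a number just above 1 is 1
```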



Theorems on Limits

To evaluate limits efficiently, especially in algebraic and trigonometric contexts, certain fundamental theorems are employed. These theorems simplify the process by providing standard results and allowing for transformations that make functions easier to handle. In this section, we discuss the relationship between left-hand and right-hand limits through symmetry and the derivation of the power rule for limits.


Theorem 1: Symmetry and Change of Variable

This theorem is particularly useful when dealing with left-hand limits. It allows us to convert a limit approaching zero from the negative side into a limit approaching zero from the positive side by reflecting the function across the y-axis.

Statement:

$\lim\limits_{x \to 0^-} f(x) = \lim\limits_{x \to 0^+} f(-x)$

Proof:

Let us introduce a new variable to shift the direction of the approach.

Step 1: Let $x = -y$.

Step 2: Determine the behavior of the new variable. As $x \to 0^-$, the value of $x$ is a very small negative number. Since $y = -x$, $y$ must be a very small positive number.

As $x \to 0^-, \ y \to 0^+$

Step 3: Substitute these values into the limit expression:

$\lim\limits_{x \to 0^-} f(x) = \lim\limits_{y \to 0^+} f(-y)$

By merely changing the dummy variable $y$ back to $x$, we get:

$\lim\limits_{x \to 0^-} f(x) = \lim\limits_{x \to 0^+} f(-x)$


Theorem 2: The Power Rule for Limits

This is one of the most frequently used results for evaluating limits of algebraic fractions. It deals with the limit of the ratio of the difference of powers to the difference of the bases.

Statement:

For any natural number $n \in N$ and a real number $a$:

$\lim\limits_{x \to a} \frac{x^n - a^n}{x - a} = n a^{n-1}$

Proof using Binomial Expansion:

We use the Change of Variable Method to shift the limit to approach zero.

Given: Let $x = a + h$. As $x \to a$, it implies that $h \to 0$.

Substituting $x$ in the expression:

Limit $= \lim\limits_{h \to 0} \frac{(a+h)^n - a^n}{(a+h) - a}$

$= \lim\limits_{h \to 0} \frac{(a+h)^n - a^n}{h}$

Using the Binomial Theorem for a positive integral index $n$ to expand $(a+h)^n$:

$(a+h)^n = a^n + n a^{n-1}h + \frac{n(n-1)}{2!} a^{n-2}h^2 + ... + h^n$

Substituting this expansion into equation provided above:

Limit $= \lim\limits_{h \to 0} \frac{\left[ a^n + n a^{n-1}h + \frac{n(n-1)}{2!} a^{n-2}h^2 + ... + h^n \right] - a^n}{h}$

Subtracting $a^n$ and dividing by $h$ (where $h \neq 0$):

Limit $= \lim\limits_{h \to 0} \frac{h \left[ n a^{n-1} + \frac{n(n-1)}{2!} a^{n-2}h + ... + h^{n-1} \right]}{h}$

Limit $= \lim\limits_{h \to 0} \left[ n a^{n-1} + \frac{n(n-1)}{2!} a^{n-2}h + ... + h^{n-1} \right]$

Taking the limit as $h \to 0$, all terms containing $h$ become zero:

Limit $= n a^{n-1} + 0 + 0 + ... + 0$

Limit $= n a^{n-1}$

Important Remarks

1. Generalization: The result $\lim\limits_{x \to a} \frac{x^n - a^n}{x - a} = n a^{n-1}$ is true not only for natural numbers but for any rational number $n$.

2. Existence Condition: It is assumed that the function is defined in the deleted neighbourhood of $a$. If the function is not defined near $a$, the limit does not exist.

3. Alternative Proof: This result can also be quickly verified using L'Hôpital's Rule (differentiating numerator and denominator):

$\lim\limits_{x \to a} \frac{\frac{d}{dx}(x^n - a^n)}{\frac{d}{dx}(x - a)} = \lim\limits_{x \to a} \frac{nx^{n-1}}{1} = n a^{n-1}$
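Theorem 2 can be verified numerically. In this illustrative sketch we take $a = 3$, $n = 4$ (the values used in Example 1 below); the name `difference_quotient` and the offset `1e-7` are our own choices:

```python
def difference_quotient(x, a, n):
    # The expression whose limit Theorem 2 evaluates
    return (x**n - a**n) / (x - a)

a, n = 3.0, 4
approx = difference_quotient(a + 1e-7, a, n)   # sample just above a
exact = n * a ** (n - 1)                       # Theorem 2 predicts 4 * 3^3 = 108
```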

Evaluation of Limits of Algebraic Functions

Evaluating limits of algebraic functions usually involves one of the following methods:

(i) Direct Substitution: If $f(a)$ is defined and not an indeterminate form.

(ii) Factorization: Cancelling out the common factor $(x-a)$ from the numerator and denominator.

(iii) Rationalization: Used when square roots are present in the expression.

(iv) Standard Result: Applying Theorem 2 directly.


Example 1. Evaluate: $\lim\limits_{x \to 3} \frac{x^4 - 81}{x - 3}$

Answer:

We can rewrite the expression to match the standard result format:

$\frac{x^4 - 81}{x - 3} = \frac{x^4 - 3^4}{x - 3}$

Using Theorem 2: $\lim\limits_{x \to a} \frac{x^n - a^n}{x - a} = n a^{n-1}$

Here, $a = 3$ and $n = 4$.

Limit $= 4 \cdot (3)^{4-1}$

$= 4 \cdot 3^3 = 4 \cdot 27$

$= 108$

Final Answer: The value of the limit is $108$.


Example 2. Evaluate: $\lim\limits_{x \to 1} \frac{x^{15} - 1}{x^{10} - 1}$

Answer:

To use the standard theorem, we divide both the numerator and denominator by $(x - 1)$:

Limit $= \frac{\lim\limits_{x \to 1} \frac{x^{15} - 1^{15}}{x - 1}}{\lim\limits_{x \to 1} \frac{x^{10} - 1^{10}}{x - 1}}$

Applying Theorem 2 to both parts:

Numerator limit $= 15 \cdot (1)^{14} = 15$

Denominator limit $= 10 \cdot (1)^9 = 10$

Therefore:

Limit $= \frac{15}{10} = \frac{3}{2} = 1.5$

Final Answer: The value of the limit is $1.5$.
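The ratio-of-theorems computation above can be sanity-checked by sampling near $x = 1$. This is an illustrative sketch with an arbitrary offset of `1e-7`:

```python
# Sample the expression of Example 2 just above x = 1; the exact limit is 15/10 = 1.5
ratio = lambda x: (x**15 - 1) / (x**10 - 1)
approx = ratio(1 + 1e-7)
```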



Theorem on Limits of Trigonometric Functions

Trigonometric limits form the core of calculus, enabling the derivation of derivatives for circular functions. These theorems establish how functions like sine, cosine, and tangent behave as their arguments approach zero or any specific real constant.


Evaluation of Trigonometric Limits as $x \to 0$

In calculus, the limits of trigonometric functions as the angle approaches zero are considered fundamental limits. These results are not only used to evaluate complex indeterminate forms but also serve as the foundation for the differentiation of trigonometric functions. We will elaborately prove why these functions behave the way they do as $x$ (measured in radians) becomes infinitesimally small.

1. Limit of the Sine Function: $\lim\limits_{x \to 0} \sin x = 0$

To prove this result, we use a geometric approach involving a unit circle and the Squeeze Principle.

Given:

A circle with center $O$ and radius $r = 1$ unit. Let $\angle AOP = x$ radians, where $0 < x < \frac{\pi}{2}$.

Construction:

Draw $MP \perp OA$ from point $P$ on the circle to the radius $OA$. Join the chord $AP$. This creates a right-angled triangle $\Delta OMP$ and a circular sector $OAP$.

[Figure: Unit circle showing arc $x$, triangle $OMP$ and sector $OAP$]

Proof:

In the right-angled triangle $\Delta OMP$:

$\sin x = \frac{\text{Perpendicular}}{\text{Hypotenuse}} = \frac{MP}{OP}$

$\sin x = \frac{MP}{1} = MP$

[Since $OP$ is radius $= 1$]           ... (i)

Now, we compare the area of the triangle and the area of the sector. From the geometry of the figure, it is visually and mathematically clear that the area of the triangle is strictly less than the area of the sector:

$\text{Area of } \Delta OAP < \text{Area of Sector } OAP$

Using the formulas: $\text{Area}(\Delta) = \frac{1}{2} \times \text{base} \times \text{height}$ and $\text{Area}(\text{Sector}) = \frac{1}{2} r^2 x$

$\frac{1}{2} \cdot OA \cdot MP < \frac{1}{2} \cdot (1)^2 \cdot x$

$\frac{1}{2} \cdot 1 \cdot \sin x < \frac{1}{2} \cdot x$

[Substituting values from (i)]

$\sin x < x$

Since we are in the first quadrant ($0 < x < \frac{\pi}{2}$), the value of $\sin x$ is always greater than $0$. Thus, we can bound the function as follows:

$0 < \sin x < x$

Applying the Squeeze Principle (Sandwich Theorem) as $x$ approaches $0$ from the right ($x \to 0^+$):

$\lim\limits_{x \to 0^+} 0 = 0$

$\lim\limits_{x \to 0^+} x = 0$

Since the lower and upper bounds both approach $0$:

$\lim\limits_{x \to 0^+} \sin x = 0$

For the Left Hand Limit (LHL), we use the property $\sin(-x) = -\sin x$:

$\lim\limits_{x \to 0^-} \sin x = \lim\limits_{h \to 0} \sin(-h) = \lim\limits_{h \to 0} (-\sin h) = 0$

As LHL = RHL = 0, we conclude that $\lim\limits_{x \to 0} \sin x = 0$.

2. Limit of the Cosine Function: $\lim\limits_{x \to 0} \cos x = 1$

We can prove this result by using the trigonometric half-angle identity.

Proof:

Recall the identity for $\cos x$ in terms of sine:

$\cos x = 1 - 2\sin^2\left(\frac{x}{2}\right)$

Applying the limit on both sides as $x \to 0$:

$\lim\limits_{x \to 0} \cos x = \lim\limits_{x \to 0} \left[ 1 - 2\sin^2\left(\frac{x}{2}\right) \right]$

Using the Algebra of Limits (Sum Rule):

$= \lim\limits_{x \to 0} (1) - 2 \cdot \lim\limits_{x \to 0} \left[ \sin^2\left(\frac{x}{2}\right) \right]$

As $x \to 0$, then $\frac{x}{2} \to 0$. We already proved that the limit of sine as its argument goes to zero is $0$:

$= 1 - 2 \cdot (0)^2 = 1 - 0$

$= 1$

Thus, $\lim\limits_{x \to 0} \cos x = 1$. This shows that for very small angles, the cosine value is almost equal to $1$.

3. Limit of the Tangent Function: $\lim\limits_{x \to 0} \tan x = 0$

This follows directly from the Quotient Rule of limits.

Proof:

We know that $\tan x = \frac{\sin x}{\cos x}$.

$\lim\limits_{x \to 0} \tan x = \lim\limits_{x \to 0} \left( \frac{\sin x}{\cos x} \right)$

Since $\lim\limits_{x \to 0} \cos x = 1$ (which is non-zero), we can apply the quotient rule:

$= \frac{\lim\limits_{x \to 0} \sin x}{\lim\limits_{x \to 0} \cos x}$

Substituting the results obtained from Theorem 3(i) and 3(ii):

$= \frac{0}{1} = 0$

Thus, $\lim\limits_{x \to 0} \tan x = 0$.


Visualizing Small Angle Behavior

To understand these limits intuitively, consider the following table which shows how values behave as $x$ (in radians) gets closer to $0$. This is often used in Physics as the Small Angle Approximation.

$x$ (radians) | $\sin x$ | $\cos x$ | $\tan x$
$0.1$ | $0.09983$ | $0.99500$ | $0.10033$
$0.01$ | $0.00999$ | $0.99995$ | $0.01000$
$0.001$ | $0.00099$ | $0.99999$ | $0.00100$
$\to 0$ | $\to 0$ | $\to 1$ | $\to 0$

Note: For very small $x$, we observe that $\sin x \approx x$ and $\tan x \approx x$.
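The table values can be regenerated with a short script. This is an illustrative sketch using Python's standard `math` module (whose trigonometric functions take radians):

```python
import math

# Recompute the small-angle table: sin x and tan x are close to x for small x (radians)
for x in (0.1, 0.01, 0.001):
    print(f"{x:>6} {math.sin(x):.5f} {math.cos(x):.5f} {math.tan(x):.5f}")
```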


Example. Evaluate: $\lim\limits_{x \to 0} \frac{x + \sin x}{\cos x}$

Answer:

Using the Algebra of Limits:

Limit $= \frac{\lim\limits_{x \to 0} x + \lim\limits_{x \to 0} \sin x}{\lim\limits_{x \to 0} \cos x}$

Applying the standard results $\lim \sin x = 0$ and $\lim \cos x = 1$:

$= \frac{0 + 0}{1}$

$= \frac{0}{1} = 0$

Final Answer: The value of the limit is $0$.


Limits of Trigonometric Functions at a General Point

In our previous discussions, we established the behavior of trigonometric functions as the variable approaches zero. However, in many calculus problems, we need to find the limit as $x$ approaches any real number $c$. This leads us to the concept of continuity. Since sine and cosine functions are continuous over their entire domain (the set of all real numbers), their limit at any point is simply the value of the function at that point.

Limits of Sine and Cosine as $x \to c$

For any real number $c$, the following results are standard:

(i) $\lim\limits_{x \to c} \sin x = \sin c$

(ii) $\lim\limits_{x \to c} \cos x = \cos c$

Detailed Proof

To prove these results, we use the Method of Substitution (Change of Variable) to shift the limit so that it approaches zero, allowing us to use the standard results $\lim\limits_{h \to 0} \sin h = 0$ and $\lim\limits_{h \to 0} \cos h = 1$.

1. Proof for the Sine Function

Step 1: Substitution

Let $x = c + h$. As the value of $x$ approaches $c$, the value of $h$ (the difference) must approach $0$.

As $x \to c, \ h \to 0$

Step 2: Expansion

Substituting the new variable into the limit expression:

$\lim\limits_{x \to c} \sin x = \lim\limits_{h \to 0} \sin(c + h)$

Using the trigonometric addition formula $\sin(A + B) = \sin A \cos B + \cos A \sin B$:

$= \lim\limits_{h \to 0} (\sin c \cos h + \cos c \sin h)$

Step 3: Applying Algebra of Limits

Since $\sin c$ and $\cos c$ are constants with respect to the limit variable $h$, we can distribute the limit:

$= \sin c \left( \lim\limits_{h \to 0} \cos h \right) + \cos c \left( \lim\limits_{h \to 0} \sin h \right)$

Using the standard results from Theorem discussed above:

$\lim\limits_{h \to 0} \cos h = 1$ and $\lim\limits_{h \to 0} \sin h = 0$

Step 4: Final Value

$= \sin c \cdot (1) + \cos c \cdot (0)$

$= \sin c$

2. Proof for the Cosine Function

Similarly, for the cosine function, let $x = c + h$ such that $h \to 0$.

$\lim\limits_{x \to c} \cos x = \lim\limits_{h \to 0} \cos(c + h)$

Using the addition formula $\cos(A + B) = \cos A \cos B - \sin A \sin B$:

$= \lim\limits_{h \to 0} (\cos c \cos h - \sin c \sin h)$

$= \cos c \left( \lim\limits_{h \to 0} \cos h \right) - \sin c \left( \lim\limits_{h \to 0} \sin h \right)$

Substituting the known limits for $\cos h$ and $\sin h$:

$= \cos c \cdot (1) - \sin c \cdot (0)$

$= \cos c$

Significance and Generalization

The fact that $\lim\limits_{x \to c} f(x) = f(c)$ for sine and cosine functions implies that they are Continuous Functions for all real values of $x$. This property is shared by other trigonometric functions within their respective domains:

1. Tangent: $\lim\limits_{x \to c} \tan x = \tan c$ (provided $c \neq (2n+1)\frac{\pi}{2}$)

2. Cosecant: $\lim\limits_{x \to c} \text{cosec} \ x = \text{cosec} \ c$ (provided $c \neq n\pi$)


Example 1. Evaluate $\lim\limits_{x \to \frac{\pi}{4}} (\sin x + \cos x)$ and state its approximate numerical value.

Answer:

Using the Sum Rule of limits and Theorem above:

Limit $= \lim\limits_{x \to \frac{\pi}{4}} \sin x + \lim\limits_{x \to \frac{\pi}{4}} \cos x$

$= \sin\left(\frac{\pi}{4}\right) + \cos\left(\frac{\pi}{4}\right)$

We know from standard trigonometric values:

$\sin\left(\frac{\pi}{4}\right) = \frac{1}{\sqrt{2}}$

$\cos\left(\frac{\pi}{4}\right) = \frac{1}{\sqrt{2}}$

Therefore:

Limit $= \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} = \frac{2}{\sqrt{2}}$

Limit $= \sqrt{2} \approx 1.414$

Final Answer: The value is $\sqrt{2}$.
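Because sine and cosine are continuous, direct substitution matches a numerical evaluation exactly. An illustrative check:

```python
import math

# Continuity of sin and cos lets us substitute x = pi/4 directly
value = math.sin(math.pi / 4) + math.cos(math.pi / 4)   # equals sqrt(2)
```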


Example 2. Prove that $\lim\limits_{x \to \frac{\pi}{2}} \sin x = 1$ using the substitution method.

Answer:

Let: $x = \frac{\pi}{2} + h$. As $x \to \frac{\pi}{2}$, then $h \to 0$.

$\lim\limits_{x \to \frac{\pi}{2}} \sin x = \lim\limits_{h \to 0} \sin\left(\frac{\pi}{2} + h\right)$

Using the identity $\sin\left(\frac{\pi}{2} + \theta\right) = \cos \theta$:

$= \lim\limits_{h \to 0} \cos h$

Applying the standard result $\lim\limits_{h \to 0} \cos h = 1$:

$= 1$

Proof Complete: The limit value is $1$.


Fundamental Trigonometric Limits

The evaluation of trigonometric limits as the variable approaches zero is a cornerstone of Calculus. These limits are considered "fundamental" because they resolve the indeterminate form $\frac{0}{0}$ which frequently appears when finding the derivatives of circular functions. In this section, we provide an exhaustive elaboration of these theorems, their rigorous geometric derivations, and their applications in solving complex problems.

The Primary Trigonometric Limits

For a real variable $x$ measured in radians, the following results are universally accepted as the fundamental limits of trigonometry:

(i) The Sine Limit:

$\lim\limits_{x \to 0} \frac{\sin x}{x} = 1$

(ii) The Tangent Limit:

$\lim\limits_{x \to 0} \frac{\tan x}{x} = 1$

(iii) The Cosine-Difference Limit:

$\lim\limits_{x \to 0} \frac{1 - \cos x}{x} = 0$

(i) Geometric Proof of $\lim\limits_{x \to 0} \frac{\sin x}{x} = 1$

This proof is a fundamental derivation in Calculus, utilizing the Squeeze Theorem (also known as the Sandwich Theorem). It involves comparing the areas of geometric shapes constructed within a unit circle to "trap" the function $\frac{\sin x}{x}$ between two functions that have the same limit at $x = 0$.

Geometric Setup and Construction:

Consider a unit circle (a circle with radius $r = 1$) with its center at the origin $O$. Let $A$ be the point $(1, 0)$ on the circle. Let $P$ be any point on the circle in the first quadrant such that the radian measure of $\angle AOP$ is $x$, where $0 < x < \frac{\pi}{2}$.

1. Construction 1: Draw $MP \perp OA$, where $M$ lies on the radius $OA$.

2. Construction 2: Draw a tangent to the circle at point $A$. Extend the radius $OP$ to intersect this tangent at point $Q$. Since $QA$ is tangent to the circle at $A$, $\angle OAQ = 90^\circ$.

[Figure: Geometric construction for the Squeeze Theorem proof]

Derivation of Trigonometric Lengths:

In the right-angled triangle $\Delta OMP$:

$\sin x = \frac{MP}{OP} = \frac{MP}{1}$

[$\because$ Radius $OP = 1$]

$MP = \sin x$

... (i)

In the right-angled triangle $\Delta OAQ$:

$\tan x = \frac{AQ}{OA} = \frac{AQ}{1}$

[$\because$ Radius $OA = 1$]

$AQ = \tan x$

... (ii)

Comparison of Areas:

From the visual representation in the unit circle, we can observe the following relationship between the areas of the internal triangle, the circular sector, and the external triangle:

Area of $\Delta OAP$ < Area of Sector $OAP$ < Area of $\Delta OAQ$

Using the standard formulas for area:

$\bullet$ Area of triangle = $\frac{1}{2} \times \text{base} \times \text{height}$

$\bullet$ Area of circular sector = $\frac{1}{2} r^2 x$ (where $x$ is in radians)

Substituting the values:

$\frac{1}{2} \cdot OA \cdot MP < \frac{1}{2} \cdot (OA)^2 \cdot x < \frac{1}{2} \cdot OA \cdot AQ$

Plugging in $OA = 1$, $MP = \sin x$, and $AQ = \tan x$:

$\frac{1}{2} \cdot 1 \cdot \sin x < \frac{1}{2} \cdot 1^2 \cdot x < \frac{1}{2} \cdot 1 \cdot \tan x$

Multiplying the entire inequality by 2:

$\sin x < x < \tan x$

... (iii)

Algebraic Manipulation:

Since we are considering $0 < x < \frac{\pi}{2}$, $\sin x$ is always positive. We divide the inequality (iii) by $\sin x$:

$\frac{\sin x}{\sin x} < \frac{x}{\sin x} < \frac{\tan x}{\sin x}$

$1 < \frac{x}{\sin x} < \frac{1}{\cos x}$

[$\because \tan x = \frac{\sin x}{\cos x}$]

To find the limit of $\frac{\sin x}{x}$, we take the reciprocal of the terms. Recall that taking the reciprocal of positive terms reverses the inequality signs:

$\frac{1}{1} > \frac{\sin x}{x} > \frac{1}{(1/\cos x)}$

$1 > \frac{\sin x}{x} > \cos x$

Rearranging the terms for better readability:

$\cos x < \frac{\sin x}{x} < 1$

... (iv)

Applying the Squeeze Principle:

The Squeeze Principle states that if $f(x) \leq g(x) \leq h(x)$ and $\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} h(x) = L$, then $\lim\limits_{x \to c} g(x) = L$.

Let us evaluate the limits of the outer functions in inequality (iv) as $x \to 0$:

$\lim\limits_{x \to 0} \cos x = \cos(0) = 1$

[Lower bound limit]

$\lim\limits_{x \to 0} 1 = 1$

[Upper bound limit]

Since both the lower and upper bounds approach $1$, the middle function must also approach $1$.

Conclusion:

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{\sin x}{x} = 1}$

Derivation of Other Standard Results

Beyond the fundamental $\lim\limits_{x \to 0} \frac{\sin x}{x} = 1$, several other results are frequently used in calculus. These are derived using the Algebra of Limits and basic Trigonometric Identities.

(ii) Proof for $\lim\limits_{x \to 0} \frac{\tan x}{x} = 1$:

To prove this, we express $\tan x$ in terms of $\sin x$ and $\cos x$.

$\lim\limits_{x \to 0} \frac{\tan x}{x} = \lim\limits_{x \to 0} \left( \frac{\sin x}{x} \cdot \frac{1}{\cos x} \right)$

$\left[ \because \tan x = \frac{\sin x}{\cos x} \right]$

Applying the product rule of limits:

$ = \left( \lim\limits_{x \to 0} \frac{\sin x}{x} \right) \cdot \left( \lim\limits_{x \to 0} \frac{1}{\cos x} \right)$

$ = 1 \cdot \frac{1}{1} = 1$

$\left[ \because \lim\limits_{x \to 0} \frac{\sin x}{x} = 1 \text{ and } \lim\limits_{x \to 0} \cos x = 1 \right]$

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{\tan x}{x} = 1}$

(iii) Proof for $\lim\limits_{x \to 0} \frac{1 - \cos x}{x} = 0$:

We use the trigonometric half-angle identity: $1 - \cos x = 2 \sin^2\left(\frac{x}{2}\right)$.

$\lim\limits_{x \to 0} \frac{1 - \cos x}{x} = \lim\limits_{x \to 0} \frac{2 \sin^2(\frac{x}{2})}{x}$

Rearranging the terms to use the standard limit $\lim\limits_{\theta \to 0} \frac{\sin \theta}{\theta} = 1$:

$ = \lim\limits_{x \to 0} \left[ \frac{\sin(\frac{x}{2})}{\frac{x}{2}} \cdot \sin(\frac{x}{2}) \right]$

$ = \left( \lim\limits_{\frac{x}{2} \to 0} \frac{\sin(\frac{x}{2})}{\frac{x}{2}} \right) \cdot \left( \lim\limits_{x \to 0} \sin(\frac{x}{2}) \right)$

$\text{As } x \to 0 \implies \frac{x}{2} \to 0$

$ = 1 \cdot \sin(0) = 1 \cdot 0 = 0$

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{1 - \cos x}{x} = 0}$
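All three fundamental limits can be confirmed numerically. This is an illustrative sketch; the step size `h` is an arbitrary small angle in radians:

```python
import math

h = 1e-6
sin_ratio = math.sin(h) / h          # tends to 1
tan_ratio = math.tan(h) / h          # tends to 1
cos_diff = (1 - math.cos(h)) / h     # tends to 0 (roughly h/2 for small h)
```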


Example 1. Evaluate $\lim\limits_{x \to 0} \frac{\sin(2x) + \tan(3x)}{x}$.

Answer:

Using the Algebra of Limits (Sum Rule), we split the expression:

Limit $= \lim\limits_{x \to 0} \frac{\sin(2x)}{x} + \lim\limits_{x \to 0} \frac{\tan(3x)}{x}$

To apply standard theorems, adjust the denominators:

$ = 2 \left( \lim\limits_{x \to 0} \frac{\sin(2x)}{2x} \right) + 3 \left( \lim\limits_{x \to 0} \frac{\tan(3x)}{3x} \right)$

$ = 2(1) + 3(1) = 5$

Final Answer: The value of the limit is $5$.
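As a sanity check (not part of the textbook solution), the quotient can be evaluated numerically for small $x$:

```python
import math

# (sin 2x + tan 3x)/x should approach 2 + 3 = 5 as x -> 0.
g = lambda x: (math.sin(2 * x) + math.tan(3 * x)) / x
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:<6} value = {g(x):.6f}")
```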




Limits of Exponential and Logarithmic Functions

To evaluate limits involving exponential and logarithmic functions, we use several standard results. These results are fundamental in Calculus and are stated here for direct application in problem-solving.

In these formulas, $e$ represents Euler's number (approximately $2.71828$), and $\log$ without a base typically refers to the natural logarithm ($\log_e$ or $\ln$).

Significance and Relevance of Euler's Number ($e$)

Euler's number, denoted by the letter $e$, is a mathematical constant approximately equal to $2.71828$. It is an irrational number, meaning its decimal expansion is infinite and non-repeating. It serves as the base of natural logarithms and is fundamental in describing growth and decay processes in nature, finance, and engineering.

1. Definition through Limits

In the context of limits, $e$ is defined as the value that the expression $(1 + 1/n)^n$ approaches as $n$ becomes infinitely large.

$e = \lim\limits_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$

By substituting $x = 1/n$, we get the form more commonly used in calculus:

$e = \lim\limits_{x \to 0} (1 + x)^{1/x}$
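The defining limit can be watched converging with a short numerical sketch; the values of $n$ below are chosen purely for illustration:

```python
# (1 + 1/n)^n approaches e = 2.71828... as n grows without bound.
for n in [10, 100, 10_000, 1_000_000]:
    print(f"n = {n:<9} (1 + 1/n)^n = {(1 + 1 / n) ** n:.8f}")
```

The sequence increases toward $e$ from below, which is why larger $n$ gives a better approximation.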

2. Relevance in Finance (Indian Banking Perspective)

In India, the concept of Continuous Compounding is a major application of Euler's number. While most Indian banks compound interest annually, half-yearly, or quarterly, theoretical models for "continuous" growth use $e$.

If a principal amount $\textsf{₹} P$ is invested at an annual interest rate $r$ (in decimals) compounded continuously for $t$ years, the final amount $A$ is calculated using the formula:

$A = P \cdot e^{rt}$

[Continuous Compounding Formula]
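A minimal sketch of the formula in code, with an assumed principal, rate, and tenure (the figures below are illustrative, not from the text):

```python
import math

# Continuous compounding: A = P * e^(r t)
P = 10_000     # principal in rupees (assumed)
r = 0.08       # 8% annual rate (assumed)
t = 5          # tenure in years (assumed)

A = P * math.exp(r * t)
print(f"Continuous compounding: {A:.2f}")
print(f"Annual compounding:     {P * (1 + r) ** t:.2f}")  # grows slightly slower
```

Continuous compounding always yields slightly more than annual compounding at the same nominal rate, since $e^{rt} > (1 + r)^t$ for $r > 0$.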

3. Euler's Identity

Euler's number connects the five most important constants in mathematics in a single equation, known as Euler's Identity, which is often cited as the most beautiful equation in math:

$e^{i\pi} + 1 = 0$

[Where $i = \sqrt{-1}$]

Comparison with Common Base 10

Students must distinguish between the Natural Logarithm (Base $e$) and the Common Logarithm (Base $10$).

Property Natural Log ($\ln x$) Common Log ($\log_{10} x$)
Base $e \approx 2.718$ $10$
Standard usage Calculus & Theoretical Science Numerical Calculation & Engineering
Conversion $\ln x = 2.303 \log_{10} x$ $\log_{10} x = \frac{\ln x}{2.303}$

Standard Results (Group A: Direct Substitution)

The following limits are obtained through direct substitution because exponential and logarithmic functions are continuous within their domains:

1. $\lim\limits_{x \to 0} e^x = 1$

... (i)

2. $\lim\limits_{x \to c} e^x = e^c$

... (ii)

3. $\lim\limits_{x \to 0} a^x = 1$

[$a > 0, a \neq 1$]           ... (iii)

4. $\lim\limits_{x \to c} a^x = a^c$

[$a > 0, a \neq 1$]           ... (iv)

5. $\lim\limits_{x \to c} \log x = \log c$

[$c > 0$]           ... (v)


Standard Results (Group B: Indeterminate Forms)

The following results are crucial as they deal with limits that initially appear as indeterminate forms like $1^\infty$ or $0/0$:

6. $\lim\limits_{x \to 0} (1 + x)^{1/x} = e$

... (vi)

7. $\lim\limits_{x \to 0} \frac{\log(1 + x)}{x} = 1$

... (vii)

8. $\lim\limits_{x \to 0} \frac{e^x - 1}{x} = 1$

... (viii)

9. $\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log a$

[$a > 0, a \neq 1$]           ... (ix)


Elaboration on Direct Substitution in Exponential and Logarithmic Limits

In Calculus, the most straightforward method to evaluate a limit is Direct Substitution. This method is applicable when a function is continuous at the point $x = c$. Exponential functions ($e^x, a^x$) and logarithmic functions ($\log x$) are continuous over their respective domains.

Therefore, if $f(x)$ is one of these functions, then:

$\lim\limits_{x \to c} f(x) = f(c)$

[Condition for Continuity]

1. Limits of the Natural Exponential Function ($e^x$)

The function $f(x) = e^x$ is defined for all real numbers. Since it is a continuous function, the limit as $x$ approaches any value $c$ is simply the value of the function at $c$.

Case (i): Limit as $x$ approaches $0$

When we substitute $x = 0$ into the exponential function:

$\lim\limits_{x \to 0} e^x = e^0$

$e^0 = 1$

[$\because$ Any non-zero base to the power 0 is 1]

Case (ii): Limit as $x$ approaches a constant $c$

For any real number $c$:

$\lim\limits_{x \to c} e^x = e^c$

... (ii)

For example, if we need to find the growth factor at $x=2$, we evaluate $\lim\limits_{x \to 2} e^x = e^2 \approx 7.389$.

2. Limits of the General Exponential Function ($a^x$)

The general exponential function $a^x$ requires the base $a$ to be positive ($a > 0$) and usually $a \neq 1$. Like $e^x$, it is continuous across its domain.

Case (iii): Limit as $x$ approaches $0$

$\lim\limits_{x \to 0} a^x = a^0$

$a^0 = 1$

[Given $a > 0$]

Case (iv): Limit as $x$ approaches a constant $c$

$\lim\limits_{x \to c} a^x = a^c$

Example: In financial mathematics, if an investment grows by a factor $a$ each year, the overall growth factor after $c$ years is $a^c$, which is exactly the value this limit gives.

3. Limits of the Logarithmic Function ($\log x$)

The logarithmic function $f(x) = \log x$ (with base $e$ or $a$) is continuous for all $x > 0$. It is not defined for zero or negative numbers.

Case (v): Limit as $x$ approaches a positive constant $c$

As long as $c$ is within the domain ($c > 0$), we use direct substitution:

$\lim\limits_{x \to c} \log x = \log c$

Important Note on the Domain:

If we try to find $\lim\limits_{x \to 0} \log x$, direct substitution fails because $\log(0)$ is undefined. Geometrically, as $x$ approaches $0$ from the right, the value of $\log x$ approaches negative infinity ($-\infty$).


Summary Table of Direct Substitution Results

Function Classification Limit Point Standard Result
Natural Exponential $x \to 0$ $\lim\limits_{x \to 0} e^x = 1$
Natural Exponential $x \to c$ $\lim\limits_{x \to c} e^x = e^c$
General Exponential $x \to 0$ $\lim\limits_{x \to 0} a^x = 1$
General Exponential $x \to c$ $\lim\limits_{x \to c} a^x = a^c$
Logarithmic $x \to c$ ($c > 0$) $\lim\limits_{x \to c} \log x = \log c$

Example 1. Evaluate $\lim\limits_{x \to 2} (3^x + e^x)$.

Answer:

Using the Sum Rule of Limits:

$\text{Limit} = \lim\limits_{x \to 2} 3^x + \lim\limits_{x \to 2} e^x$

By direct substitution:

$ = 3^2 + e^2$

$ = 9 + e^2$


Elaboration on Indeterminate Forms in Exponential and Logarithmic Limits

The standard results in this group are designed to handle limits that, upon direct substitution, result in indeterminate forms such as $1^{\infty}$ or $\frac{0}{0}$. These forms do not have a defined value and require algebraic manipulation or specific theorems to evaluate.

4. The Limit $\lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}} = e$

This limit is often considered the formal definition of Euler's number ($e$). It represents the $1^{\infty}$ indeterminate form.

Key Observation:

For this result to be valid, the term added to $1$ (which is $x$) and the exponent (which is $\frac{1}{x}$) must be reciprocals of each other. As $x$ approaches $0$, the base approaches $1$ and the power approaches infinity.

$\lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}} = e$

Alternate Form: By substituting $x = \frac{1}{n}$, as $x \to 0$, $n \to \infty$:

$\lim\limits_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n = e$

5. The Limit $\lim\limits_{x \to 0} \frac{\log(1 + x)}{x} = 1$

This limit is a fundamental result used to evaluate logarithmic expressions that result in the $\frac{0}{0}$ indeterminate form. When we substitute $x = 0$ directly into the expression $\frac{\log(1+x)}{x}$, we get $\frac{\log(1)}{0} = \frac{0}{0}$. To resolve this, we rely on the properties of logarithms and the definition of Euler's number ($e$).

Prerequisites and Properties Used:

Before proceeding with the derivation, we must recall two critical mathematical rules:

1. Power Rule of Logarithms: $n \cdot \log_a(m) = \log_a(m^n)$.

2. Definition of $e$: $\lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}} = e$.

3. Continuity: Since the logarithm function is continuous for positive values, the limit operator can be moved inside the logarithm: $\lim \log(f(x)) = \log(\lim f(x))$.

Step-by-Step Derivation

Let us consider the limit expression:

$L = \lim\limits_{x \to 0} \frac{\log(1 + x)}{x}$

Step 1: We rewrite the fraction as a product of a reciprocal and a logarithm.

$L = \lim\limits_{x \to 0} \left[ \frac{1}{x} \cdot \log(1 + x) \right]$

Step 2: Applying the power rule of logarithms, we move the coefficient $\frac{1}{x}$ to the exponent of the argument $(1+x)$.

$L = \lim\limits_{x \to 0} \log(1 + x)^{\frac{1}{x}}$

[$\because n \log m = \log m^n$]

Step 3: Using the property of continuity for logarithmic functions, we move the limit inside the logarithm.

$L = \log \left[ \lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}} \right]$

Step 4: Substitute the standard limit result $\lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}} = e$ into the bracket.

$L = \log_e (e)$

[$\because \lim\limits_{x \to 0} (1+x)^{\frac{1}{x}} = e$]

Step 5: Since the base of the natural logarithm is $e$, the value of $\log_e e$ is $1$.

$L = 1$

Conclusion:

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{\log(1 + x)}{x} = 1}$
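The derivation can be cross-checked numerically (a quick sketch, not part of the proof):

```python
import math

# log(1 + x)/x should approach 1 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:<6} log(1+x)/x = {math.log(1 + x) / x:.8f}")
```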

6. The Limit $\lim\limits_{x \to 0} \frac{e^x - 1}{x} = 1$

The limit $\lim\limits_{x \to 0} \frac{e^x - 1}{x}$ is one of the most vital results in calculus, especially when finding the derivative of exponential functions. When we apply direct substitution $x = 0$:

$\frac{e^0 - 1}{0} = \frac{1 - 1}{0} = \frac{0}{0}$

[Indeterminate Form]

To resolve this $\frac{0}{0}$ form, we use a substitution method that links this exponential limit back to a known logarithmic limit.

Step-by-Step Derivation

I. Substitution and Change of Variable:

Let us assume a new variable $y$ such that:

$y = e^x - 1$

... (i)

Now, we rearrange equation (i) to express $x$ in terms of $y$:

$e^x = 1 + y$

Taking Natural Logarithm ($\log_e$) on both sides:

$x = \log_e(1 + y)$

... (ii)

II. Changing the Limit:

As the original variable $x$ approaches $0$, we must determine what happens to $y$:

$\text{As } x \to 0, y = e^0 - 1 = 1 - 1 = 0$

$\therefore \text{As } x \to 0 \implies y \to 0$

III. Substituting into the Limit Expression:

Substituting the values from (i) and (ii) into the original limit:

$\lim\limits_{x \to 0} \frac{e^x - 1}{x} = \lim\limits_{y \to 0} \frac{y}{\log_e(1 + y)}$

We can rewrite the fraction by moving $y$ to the denominator of the denominator:

$ = \lim\limits_{y \to 0} \frac{1}{\frac{\log_e(1 + y)}{y}}$

Using the previously proven logarithmic limit result $\lim\limits_{y \to 0} \frac{\log_e(1 + y)}{y} = 1$:

$ = \frac{1}{1} = 1$

[Using $\lim\limits_{y \to 0} \frac{\log_e(1 + y)}{y} = 1$]

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{e^x - 1}{x} = 1}$
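A quick numerical confirmation of this result (illustrative only):

```python
import math

# (e^x - 1)/x should approach 1 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:<6} (e^x - 1)/x = {(math.exp(x) - 1) / x:.8f}")
```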

7. The Limit $\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log a$

The limit $\lim\limits_{x \to 0} \frac{a^x - 1}{x}$ generalizes the natural exponential limit for any positive real base $a$ where $a \neq 1$. While the limit for base $e$ results in $1$, the limit for a general base $a$ results in the natural logarithm of that base.

To prove this, we transform the general exponential expression into a natural exponential expression using a standard logarithmic identity.

Given Identity:

Any positive number $a$ raised to a power $x$ can be written as an exponent of $e$ using the rule:

$a^x = e^{\log_e(a^x)} = e^{x \log_e a}$

[Property of Logs]           ... (i)

Step-by-Step Derivation:

Step 1: Substitute the identity from (i) into the original limit expression.

$\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \lim\limits_{x \to 0} \frac{e^{x \log_e a} - 1}{x}$

Step 2: To apply the standard theorem $\lim\limits_{\theta \to 0} \frac{e^\theta - 1}{\theta} = 1$, the expression in the exponent must exactly match the denominator. Here, the exponent is $x \log_e a$, but the denominator is only $x$.

We multiply and divide the denominator by the constant $\log_e a$:

$ = \lim\limits_{x \to 0} \left[ \frac{e^{x \log_e a} - 1}{x \cdot \log_e a} \cdot \log_e a \right]$

Step 3: Since $\log_e a$ is a constant, we can move it outside the limit operator.

$ = \log_e a \cdot \lim\limits_{x \to 0} \left( \frac{e^{x \log_e a} - 1}{x \log_e a} \right)$

Step 4: Change the limit variable. Let $\theta = x \log_e a$. As $x \to 0$, it follows that $\theta \to 0$.

$ = \log_e a \cdot \lim\limits_{\theta \to 0} \left( \frac{e^\theta - 1}{\theta} \right)$

Step 5: Using the standard result $\lim\limits_{\theta \to 0} \frac{e^\theta - 1}{\theta} = 1$:

$ = \log_e a \cdot (1) = \log_e a$

[Using $\lim\limits_{x \to 0} \frac{e^x-1}{x} = 1$]

$\therefore \mathbf{\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log_e a}$
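The general result can be spot-checked numerically; the base $a = 2$ below is chosen only for illustration:

```python
import math

# (a^x - 1)/x should approach log_e(a); try a = 2.
a = 2
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x:<6} (2^x - 1)/x = {(a ** x - 1) / x:.8f}")
print(f"log_e(2)      = {math.log(a):.8f}")
```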


Summary Table of Indeterminate Exponential and Logarithmic Limits

Limit Type Mathematical Expression Indeterminate Form Standard Result
Definition of Euler's Number $\lim\limits_{x \to 0} (1 + x)^{\frac{1}{x}}$ $1^{\infty}$ $e$
Natural Logarithmic Limit $\lim\limits_{x \to 0} \frac{\log_e(1 + x)}{x}$ $\frac{0}{0}$ $1$
Exponential Limit (Base $e$) $\lim\limits_{x \to 0} \frac{e^x - 1}{x}$ $\frac{0}{0}$ $1$
General Exponential Limit $\lim\limits_{x \to 0} \frac{a^x - 1}{x}$ $\frac{0}{0}$ $\log_e a$
Alternate $e$ definition $\lim\limits_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$ $1^{\infty}$ $e$

Example 2. Evaluate $\lim\limits_{x \to 0} \frac{e^{3x} - 1}{\sin 2x}$.

Answer:

We divide both numerator and denominator by $x$ to use standard results:

$\text{Limit} = \frac{\lim\limits_{x \to 0} \frac{e^{3x} - 1}{x}}{\lim\limits_{x \to 0} \frac{\sin 2x}{x}}$

Adjusting coefficients for the numerator and denominator:

$ = \frac{3 \cdot \lim\limits_{3x \to 0} \frac{e^{3x} - 1}{3x}}{2 \cdot \lim\limits_{2x \to 0} \frac{\sin 2x}{2x}}$

$ = \frac{3(1)}{2(1)} = \frac{3}{2}$

Final Answer: The value of the limit is $\frac{3}{2}$.

Example 3. Evaluate $\lim\limits_{x \to 0} \frac{\log(1 + 5x)}{3x}$.

Answer:

To apply the standard logarithmic limit, the denominator must match the argument term $5x$. Multiplying and dividing by $\frac{5}{3}$ converts the denominator $3x$ into $5x$:

$\text{Limit} = \frac{5}{3} \cdot \lim\limits_{5x \to 0} \frac{\log(1+5x)}{5x}$

$ = \frac{5}{3} \cdot (1) = \frac{5}{3}$

Final Answer: $\frac{5}{3}$



Derivatives

The Concept of Instantaneous Rate of Change

The derivative is not merely a mathematical formula; it represents the actual rate of change of a dependent variable with respect to an independent variable at a specific moment. In physics, this is most clearly understood through the concept of velocity.

To understand this intuitively, let us consider an object dropped from a high tower (like the Qutub Minar in Delhi). Scientific observation shows that the distance $s$ (in metres) covered in time $t$ (in seconds) follows the law of motion: $s = 4.9t^2$.

Geometric and Physical Setup:

Imagine a body falling vertically from a point $O$ (the top of the tower). We track its position at two very close moments in time.

Diagram showing a falling body from point O to P and then Q

1. At time $t$, the body is at point $P$. The distance $OP = f(t)$.

2. After a very small time interval $h$, at time $t + h$, the body is at point $Q$. The distance $OQ = f(t + h)$.

Mathematical Derivation from First Principles

The distance covered during the interval $h$ is the difference between the two positions.

Distance $PQ = f(t + h) - f(t)$

The Average Velocity over this interval $h$ is calculated as:

$\text{Avg Velocity} = \frac{\text{Change in Distance}}{\text{Change in Time}}$

$\text{Avg Velocity} = \frac{f(t + h) - f(t)}{h}$

If we make the interval $h$ smaller and smaller (tending to zero), the average velocity becomes the Instantaneous Velocity. This limiting value is the Derivative of the function.

$f'(t) = \lim\limits_{h \to 0} \frac{f(t + h) - f(t)}{h}$

[Definition of Derivative]

Calculation for $f(t) = 4.9t^2$:

Let's derive the general formula for the rate of distance (velocity) at any time $t$:

$f(t) = 4.9t^2$

$f(t + h) = 4.9(t + h)^2$

Expanding the term $(t + h)^2$:

$f(t + h) = 4.9(t^2 + 2th + h^2)$

Now, substituting into the derivative formula:

$f'(t) = \lim\limits_{h \to 0} \frac{4.9(t^2 + 2th + h^2) - 4.9t^2}{h}$

$f'(t) = \lim\limits_{h \to 0} \frac{4.9(2th + h^2)}{h}$

[Cancelling $4.9t^2$]

$f'(t) = \lim\limits_{h \to 0} 4.9(2t + h)$

[Dividing by $h$]

$f'(t) = 4.9(2t + 0) = 9.8t$

Instantaneous Velocity at Different Time Intervals

Using our derived formula $v = 9.8t$, let's calculate the velocity of the body at different moments in its fall.

Case 1: At $t = 3$ seconds

$v = 9.8 \times 3$

$v = 29.4 \text{ m/sec}$

Case 2: At $t = 5$ seconds

$v = 9.8 \times 5$

$v = 49.0 \text{ m/sec}$

Comparison Table: Average vs Instantaneous Velocity

The following table illustrates how the average velocity over the interval $[2 - h, \; 2]$ approaches the instantaneous velocity at $t = 2$ seconds as the time interval $h$ decreases.

Time Interval ($h$) Average Velocity $\frac{f(2) - f(2-h)}{h}$ Instantaneous Velocity ($f'(2)$)
$1.0 \text{ sec}$ $14.7 \text{ m/sec}$ $19.6 \text{ m/sec}$
$0.1 \text{ sec}$ $19.11 \text{ m/sec}$ $19.6 \text{ m/sec}$
$0.01 \text{ sec}$ $19.551 \text{ m/sec}$ $19.6 \text{ m/sec}$
$0.001 \text{ sec}$ $19.595 \text{ m/sec}$ $19.6 \text{ m/sec}$
Limit ($h \to 0$) $19.6 \text{ m/sec}$ $19.6 \text{ m/sec}$
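The convergence shown in the table can be reproduced with a short numerical sketch. Both one-sided average velocities tend to the same limiting value $9.8 \times 2 = 19.6$:

```python
f = lambda t: 4.9 * t**2   # distance (in metres) fallen in t seconds

t = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    backward = (f(t) - f(t - h)) / h      # average velocity over [2-h, 2]
    forward = (f(t + h) - f(t)) / h       # average velocity over [2, 2+h]
    print(f"h = {h:<6} backward = {backward:.4f}  forward = {forward:.4f}")

print("instantaneous f'(2) =", 9.8 * t)
```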

NOTE: Since the body is falling vertically, there is no change in direction. Consequently, the distance covered and the magnitude of displacement are equal, making the speed and velocity numerically identical in this experiment.


Study of Derivative at a Given Point

The derivative of a function at a specific point $x = c$ is a measure of how the function value changes as its input changes slightly around that point. Geometrically, it represents the slope of the unique tangent line to the graph of the function at that point. If such a unique tangent exists and is not vertical, the function is said to be differentiable at that point.

Geometrical Interpretation

Consider the graph of a real-valued function $y = f(x)$. Let $P(c, f(c))$ be a fixed point on the curve. To determine the slope of the tangent at $P$, we pick a neighbouring point $Q(c + h, f(c + h))$ on the curve, where $h$ is a small non-zero increment in $x$.

Geometric representation of secant line PQ becoming a tangent at P

The line passing through $P$ and $Q$ is called the secant line. The slope of this secant line ($m_{sec}$) is calculated using the coordinate geometry formula $\frac{y_2 - y_1}{x_2 - x_1}$:

$m_{sec} = \frac{f(c + h) - f(c)}{(c + h) - c}$

[Using slope formula]

$m_{sec} = \frac{f(c + h) - f(c)}{h}$

As the point $Q$ approaches $P$ along the curve, the distance $h$ between their $x$-coordinates approaches zero ($h \to 0$). The secant line $PQ$ rotates about $P$ and, in the limiting position, becomes the tangent at $P$. The slope of this tangent is the derivative $f'(c)$.

$f'(c) = \lim\limits_{h \to 0} \frac{f(c+h) - f(c)}{h}$

Analytical Condition for Differentiability

The differentiability of a function at a point is a rigorous analytical property. While continuity ensures there are no breaks in the graph, differentiability ensures the graph is "smooth" enough to have a unique, non-vertical tangent line at that point. To determine this mathematically, we examine the limit of the incremental ratio from both sides of the point.

Detailed Study of One-Sided Derivatives

Let $f(x)$ be a real-valued function defined on an open interval $(a, b)$ and let $c \in (a, b)$. The existence of the derivative depends on the convergence of the left-hand and right-hand limits of the slope of the secant line.

1. Left Hand Derivative (LHD)

The Left Hand Derivative measures the instantaneous rate of change as the independent variable $x$ approaches $c$ from the left side ($x < c$). In this case, we use a small positive increment $h$ and subtract it from $c$.

$L f'(c) = \lim\limits_{h \to 0^+} \frac{f(c - h) - f(c)}{-h}$

... (i)

Alternatively, in terms of $x$:

$L f'(c) = \lim\limits_{x \to c^-} \frac{f(x) - f(c)}{x - c}$

[Limit from the left]

2. Right Hand Derivative (RHD)

The Right Hand Derivative measures the instantaneous rate of change as $x$ approaches $c$ from the right side ($x > c$). Here, we add a small positive increment $h$ to $c$.

$R f'(c) = \lim\limits_{h \to 0^+} \frac{f(c + h) - f(c)}{h}$

... (ii)

Alternatively, in terms of $x$:

$R f'(c) = \lim\limits_{x \to c^+} \frac{f(x) - f(c)}{x - c}$

[Limit from the right]

Criteria for Existence of Derivative

For a function to be differentiable at $x = c$, the following conditions must be satisfied simultaneously:

Condition I: Finiteness
Both $L f'(c)$ and $R f'(c)$ must exist as finite real numbers. If either limit results in $\infty$ or $-\infty$, the function is not differentiable at that point, even if the graph appears continuous (this often corresponds to a vertical tangent).

Condition II: Equality
The values of the left and right limits must be exactly equal.

$L f'(c) = R f'(c)$

[Existence Condition]

If $L f'(c) \neq R f'(c)$, we say the function has a kink or a sharp corner at $x = c$. At such a point, one can draw infinitely many lines that "touch" the point, but none of them qualify as a unique tangent.
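The classic kink example is $f(x) = |x|$ at $x = 0$; the following sketch computes the one-sided difference quotients for a small increment:

```python
f = abs   # f(x) = |x| has a sharp corner at x = 0

h = 1e-6  # small positive increment
lhd = (f(0 - h) - f(0)) / (-h)   # left-hand derivative:  -1
rhd = (f(0 + h) - f(0)) / h      # right-hand derivative: +1
print(f"LHD = {lhd}, RHD = {rhd}")  # unequal, so f'(0) does not exist
```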

Summary Table: LHD vs RHD

Property Left Hand Derivative (LHD) Right Hand Derivative (RHD)
Direction Approaching from $x < c$ Approaching from $x > c$
Notation $f'_{-}(c)$ or $L f'(c)$ $f'_{+}(c)$ or $R f'(c)$
Formula ($h \to 0^+$) $\frac{f(c-h) - f(c)}{-h}$ $\frac{f(c+h) - f(c)}{h}$
Geometric Meaning Slope of tangent from the left Slope of tangent from the right

Comparison of Rates

The following table distinguishes between average and instantaneous rates of change.

Feature Average Rate of Change Instantaneous Rate of Change
Definition Change over a finite interval $[x, x + \Delta x]$ Change at a specific point $x$
Formula $\frac{f(x + \Delta x) - f(x)}{\Delta x}$ $\lim\limits_{h \to 0} \frac{f(x + h) - f(x)}{h}$
Geometric Meaning Slope of the Secant line Slope of the Tangent line
Physical Example Average Velocity of a car Speedometer reading (Instantaneous Speed)

Physical Significance: Derivative as a Rate Measure

In the physical world, quantities are rarely static; they change in relation to one another. The derivative provides a precise mathematical tool to measure the instantaneous rate of change of a dependent variable with respect to an independent variable. This is distinct from the average rate, as it provides the rate at a specific, frozen moment in time or space.

Theory of Increments

Consider a functional relationship where $y$ depends on $x$, represented as $y = f(x)$. If the independent variable $x$ undergoes a small change, say $\delta x$ (read as delta x), then the dependent variable $y$ will also undergo a corresponding change $\delta y$.

Mathematical Derivation:

$y + \delta y = f(x + \delta x)$

[New value of function]

Subtracting the original relation $y = f(x)$ from the equation above, we get the increment in $y$:

$\delta y = f(x + \delta x) - f(x)$

[Change in dependent variable]

The average rate of change of $y$ per unit change in $x$ over the interval $[x, x + \delta x]$ is given by the incremental ratio:

$\frac{\delta y}{\delta x} = \frac{f(x + \delta x) - f(x)}{\delta x}$

To find the actual (instantaneous) rate of change, we take the limit as the increment $\delta x$ approaches zero. This limiting value is the derivative $\frac{dy}{dx}$:

$\frac{dy}{dx} = \lim\limits_{\delta x \to 0} \frac{\delta y}{\delta x}$

[Derivative as Rate Measure]

Applications of Derivatives as a Rate Measure

The derivative $\frac{dy}{dx}$ is defined as the instantaneous rate of change of $y$ with respect to $x$. This concept is applied to solve real-world problems involving economics, geometry, and physical motion.

I. Economics and Commerce: Marginal Analysis

In commerce and business economics, the term "Marginal" refers to the rate of change of a total quantity with respect to the number of units produced or sold. This is essential for Indian entrepreneurs and accountants to determine profit-maximizing levels of production.

1. Marginal Cost (MC)

If $C(x)$ is the total cost of producing $x$ units of a product, the marginal cost represents the additional cost incurred by producing one more unit at a specific production level $x$.

$MC = \frac{d}{dx} [C(x)]$

[Instantaneous change in $\textsf{₹}$]

2. Marginal Revenue (MR)

If $R(x)$ is the total revenue received from the sale of $x$ units, the marginal revenue is the rate of change of revenue with respect to the number of units sold.

$MR = \frac{d}{dx} [R(x)]$

[Revenue change in $\textsf{₹}$ per unit]

Note: Profit $P(x)$ is given by $R(x) - C(x)$. Therefore, Marginal Profit $MP = \frac{dP}{dx} = MR - MC$.
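A sketch of marginal analysis with assumed cost and revenue functions (the quadratic cost and linear revenue below are illustrative, not from the text):

```python
# Assumed functions: C(x) = 0.01x^2 + 5x + 2000 (rupees), R(x) = 15x (rupees).
C = lambda x: 0.01 * x**2 + 5 * x + 2000
R = lambda x: 15 * x

def derivative(g, x, h=1e-6):
    """Numerical derivative via the difference quotient."""
    return (g(x + h) - g(x)) / h

x = 100  # production level in units (assumed)
MC, MR = derivative(C, x), derivative(R, x)
print(f"MC = {MC:.4f}, MR = {MR:.4f}, MP = MR - MC = {MR - MC:.4f}")
```

Since $MR > MC$ at this level, each additional unit still adds to profit, so production can profitably be increased.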

II. Mensuration and Geometry: Dimensional Changes

In geometry, derivatives are used to find how quickly the area, volume, or surface area of an object changes as its dimensions (like radius, side, or height) vary.

Standard Rate Formulas:

Shape Quantity Derivative (Rate Measure)
Circle Area $A = \pi r^2$ $\frac{dA}{dr} = 2\pi r$ (Circumference)
Sphere Volume $V = \frac{4}{3}\pi r^3$ $\frac{dV}{dr} = 4\pi r^2$ (Surface Area)
Cube Volume $V = x^3$ $\frac{dV}{dx} = 3x^2$
Cylinder Volume $V = \pi r^2 h$ $\frac{dV}{dt} = \pi \left( r^2 \frac{dh}{dt} + 2rh \frac{dr}{dt} \right)$

When these quantities change with respect to time ($t$), we apply the Chain Rule:

$\frac{dA}{dt} = \frac{dA}{dr} \cdot \frac{dr}{dt}$

[Rate of change of Area wrt time]

III. Physics: Kinematics and Motion

In the Indian science stream (Class 11 Physics), derivatives form the backbone of Calculus-based Kinematics. They allow us to move from displacement to velocity and then to acceleration.

1. Velocity ($v$)

Velocity is the rate of change of displacement ($s$) with respect to time ($t$).

$v = \frac{ds}{dt} = \lim\limits_{\Delta t \to 0} \frac{\Delta s}{\Delta t}$

[Measured in $m/s$]

2. Acceleration ($a$)

Acceleration is the rate of change of velocity ($v$) with respect to time ($t$).

$a = \frac{dv}{dt} = \frac{d^2s}{dt^2}$

[Measured in $m/s^2$]

Acceleration can also be expressed in terms of displacement $s$ using the chain rule:

$a = v \frac{dv}{ds}$

[Useful when velocity depends on $s$]


Derivative at Any Point and First Principles

The First Principles method, often referred to as the Ab-Initio method, is the foundational process of determining the derivative of a function. It treats the derivative not as a set of memorized rules, but as a dynamic limit of a ratio.

Conceptual Breakdown of the Formula

To differentiate a function $f(x)$ at any general point $x$, we analyze the behavior of the function over an infinitesimally small interval. Let us break down the expression:

$f'(x) = \lim\limits_{h \to 0} \frac{f(x + h) - f(x)}{h}$

1. The Difference Quotient

The term $\frac{f(x + h) - f(x)}{h}$ is known as the Difference Quotient. It represents the slope of a secant line passing through the points $(x, f(x))$ and $(x+h, f(x+h))$. In physical terms, this is the average rate of change of the function over the distance $h$.

2. The Role of the Limit ($\lim\limits_{h \to 0}$)

By applying the limit as $h$ approaches zero, we shrink the interval until the two points on the curve effectively merge into one. This transforms the secant line into a tangent line at the point $x$. The resulting value is the instantaneous rate of change.

Step-by-Step Procedure for Ab-Initio Differentiation

In exams, solving a problem from "First Principles" requires a systematic four-step approach:

Step I: Write down the given function $f(x)$ and find $f(x + h)$ by replacing $x$ with $(x + h)$.

Step II: Calculate the increment in the function: $\delta y = f(x + h) - f(x)$.

Step III: Form the incremental ratio $\frac{f(x + h) - f(x)}{h}$ and simplify the expression algebraically to eliminate $h$ from the denominator where possible.

Step IV: Evaluate the limit as $h \to 0$ to find $f'(x)$.

Derivation of Standard Formulas

Let us derive the derivative of a basic power function to see this method in action.

Case Study: Derivation of $\frac{d}{dx}(x^n)$ for $n=2$

Given: $f(x) = x^2$

$f(x+h) = (x+h)^2 = x^2 + 2xh + h^2$

Applying the formula:

$f'(x) = \lim\limits_{h \to 0} \frac{(x^2 + 2xh + h^2) - x^2}{h}$

$f'(x) = \lim\limits_{h \to 0} \frac{2xh + h^2}{h}$

$f'(x) = \lim\limits_{h \to 0} (2x + h)$

$f'(x) = 2x$

... (Result)
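The limit in the case study can be watched converging numerically; the sample point $x = 3$ below is chosen arbitrarily:

```python
f = lambda x: x**2

# Difference quotient (f(x+h) - f(x))/h at x = 3; should approach 2x = 6.
x = 3
for h in [0.1, 0.01, 0.001]:
    print(f"h = {h:<6} quotient = {(f(x + h) - f(x)) / h:.6f}")
```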

Standard Notations for Derivatives

When a function is defined as $y = f(x)$, its derivative with respect to the independent variable $x$ is called the differential coefficient.

Notation Name Symbolic Representation
Lagrange's Notation $f'(x)$ or $y'$
Leibniz's Notation $\frac{d}{dx}(f(x))$ or $\frac{dy}{dx}$
Suffix/Subscript Notation $y_1$ or $f_1(x)$
Differential Operator $D f(x)$ or $Dy$

Derivative Evaluated at a Point

If we need to denote the derivative of a function specifically at a point where $x = c$, the following notations are employed:

$f'(c)$

[Functional Notation]

$\left( \frac{dy}{dx} \right)_{x=c}$ or $\frac{dy}{dx} \Big|_{x=c}$

[Leibniz Evaluation]

$\frac{d(f(x))}{dx} \Big|_c$

[Alternative form]



Standard Results on Derivatives

1. Derivative of a Constant Function

The derivative of a constant function is always zero. This is because a constant function represents a horizontal line on a graph, and the slope of a horizontal line is zero at every point.

Given: A function $f(x) = c$, where $c$ is any fixed real number (constant) for all $x \in \mathbb{R}$.

To Prove: $\frac{d}{dx}(c) = 0$

Proof:

By the definition of the first principle of derivatives, we have:

$f'(x) = \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h}$

For the given constant function:

$f(x) = c$

(Given constant function)

Since the function value remains the same for any input:

$f(x+h) = c$

(Value remains constant)

Substituting these values into the first principle formula:

$f'(x) = \lim\limits_{h \to 0} \frac{c - c}{h}$

Simplifying the numerator:

$f'(x) = \lim\limits_{h \to 0} \frac{0}{h}$

Since $0$ divided by any non-zero value $h$ (where $h \to 0$ but $h \neq 0$) is exactly $0$:

$f'(x) = \lim\limits_{h \to 0} (0)$

$f'(x) = 0$

Thus, from the above equation, we conclude that:

$\mathbf{\frac{d}{dx}(c) = 0}$

This result holds true for all $x \in \mathbb{R}$.


2. Derivative of the Identity Function

The derivative of the identity function $f(x) = x$ is unity ($1$). Geometrically, the function $f(x) = x$ represents a straight line passing through the origin at an angle of $45^\circ$ to the x-axis, meaning its slope (or derivative) is constant and equal to $1$ at every point.

Given: A function $f(x) = x$.

To Prove: $\frac{d}{dx}(x) = 1$

Proof:

By the definition of the first principle of derivatives, the derivative of a function $f(x)$ is given by:

$f'(x) = \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h}$

For the given identity function:

$f(x) = x$

... (i)

Replacing $x$ with $x + h$, we get:

$f(x+h) = x + h$

... (ii)

Now, substituting the values from equations (i) and (ii) into the first principle formula:

$f'(x) = \lim\limits_{h \to 0} \frac{(x + h) - x}{h}$

Simplifying the numerator by subtracting $x$:

$f'(x) = \lim\limits_{h \to 0} \frac{h}{h}$

As $h$ approaches $0$, but is not equal to $0$ ($h \neq 0$), we can cancel $h$ from the numerator and denominator:

$f'(x) = \lim\limits_{h \to 0} (1)$

$\left[\because \frac{\cancel{h}}{\cancel{h}} = 1\right]$

Since the limit of a constant is the constant itself:

$f'(x) = 1$

Therefore, we have established that:

$\mathbf{\frac{d}{dx}(x) = 1}$

This result shows that the rate of change of $x$ with respect to itself is always 1.


3. Derivative of the Power Function ($x^n$)

The derivative of $x$ raised to the power of $n$ is the product of the exponent $n$ and $x$ raised to the power $(n-1)$. This fundamental rule is widely known as the Power Formula.

Given: A function $f(x) = x^n$, where $n$ is any positive integer ($n \in \mathbb{N}$).

To Prove: $\frac{d}{dx}(x^n) = nx^{n-1}$

Proof:

By the definition of the first principle of derivatives, we have:

$f'(x) = \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h}$

Substituting $f(x) = x^n$ and $f(x+h) = (x+h)^n$ into the formula:

$f'(x) = \lim\limits_{h \to 0} \frac{(x+h)^n - x^n}{h}$

... (i)

To evaluate this limit, we can transform the variable. Let us substitute $(x+h) = z$.

As $h \to 0$, it is evident that $(x+h) \to x$, which means $z \to x$.

Also, from $z = x+h$, we can write $h = z - x$.

Substituting these into equation (i), we get:

$f'(x) = \lim\limits_{z \to x} \frac{z^n - x^n}{z - x}$

We know from the standard limit identity that for any positive integer $n$:

$\lim\limits_{z \to a} \frac{z^n - a^n}{z - a} = na^{n-1}$

[Standard Identity]

Proof of the Standard Identity:

Let $z = a + h$. As $z \to a$, it follows that $h \to 0$. Substituting these values into the expression:

$\lim\limits_{z \to a} \frac{z^n - a^n}{z - a} = \lim\limits_{h \to 0} \frac{(a + h)^n - a^n}{h}$

Taking $a^n$ common from the numerator (assuming $a \neq 0$; when $a = 0$ the limit reduces to $\lim\limits_{z \to 0} z^{n-1}$, which can be evaluated directly):

$= \lim\limits_{h \to 0} \frac{a^n(1 + \frac{h}{a})^n - a^n}{h} = \lim\limits_{h \to 0} \frac{a^n \left[ (1 + \frac{h}{a})^n - 1 \right]}{h}$

Using the Binomial Expansion $(1 + t)^n = 1 + nt + \frac{n(n-1)}{2!}t^2 + \dots$ with $t = \frac{h}{a}$:

$= \lim\limits_{h \to 0} \frac{a^n \left[ (1 + n(\frac{h}{a}) + \frac{n(n-1)}{2!}(\frac{h}{a})^2 + \dots) - 1 \right]}{h}$

Canceling the $1$ and dividing by $h$:

$= \lim\limits_{h \to 0} a^n \left[ \frac{n}{a} + \frac{n(n-1)}{2!} \cdot \frac{h}{a^2} + \dots \right]$

Applying the limit $h \to 0$, all terms containing $h$ vanish:

$= a^n \cdot \frac{n}{a} = n \cdot a^{n-1}$

Applying this identity to our limit where $a = x$:

$f'(x) = n \cdot x^{n-1}$

Therefore:

$\mathbf{\frac{d}{dx}(x^n) = nx^{n-1}}$

Alternative Method (Using Binomial Theorem)

Let $f(x) = x^n$. By first principles:

$f'(x) = \lim\limits_{h \to 0} \frac{(x+h)^n - x^n}{h}$

Expanding $(x+h)^n$ using the Binomial Theorem:

$(x+h)^n = x^n + n \cdot x^{n-1}h + \frac{n(n-1)}{2!} x^{n-2}h^2 + \dots + h^n$

Substituting this expansion back into the limit:

$f'(x) = \lim\limits_{h \to 0} \frac{[x^n + nx^{n-1}h + \frac{n(n-1)}{2!}x^{n-2}h^2 + \dots + h^n] - x^n}{h}$

Simplifying the numerator by canceling $x^n$:

$f'(x) = \lim\limits_{h \to 0} \frac{nx^{n-1}h + \frac{n(n-1)}{2!}x^{n-2}h^2 + \dots + h^n}{h}$

Dividing each term by $h$:

$f'(x) = \lim\limits_{h \to 0} \left[ nx^{n-1} + \frac{n(n-1)}{2!}x^{n-2}h + \dots + h^{n-1} \right]$

Applying the limit $h \to 0$, all terms containing $h$ become zero:

$f'(x) = nx^{n-1} + 0 + 0 + \dots + 0$

$f'(x) = nx^{n-1}$

REMARK:

Although the proof above is provided for a positive integer $n$, the Power Formula $\frac{d}{dx}(x^n) = nx^{n-1}$ remains valid for all rational values of $n$ (i.e., $n \in \mathbb{Q}$), and in fact for all real exponents, wherever $x^n$ is defined.
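The remark can be illustrated numerically. The Python sketch below (not part of the chapter) compares a central difference quotient, a standard numerical estimate of the derivative, against $nx^{n-1}$ at the assumed test point $x = 2$ for an integer, a rational, and a negative exponent:

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Central difference quotient: a standard numerical estimate of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Power Formula d/dx(x^n) = n x^(n-1), spot-checked at x = 2.0 for an
# integer, a rational, and a negative exponent, as the remark claims.
checks = {}
for n in (3, 0.5, -2):
    approx = numerical_derivative(lambda x: x ** n, 2.0)
    exact = n * 2.0 ** (n - 1)
    checks[n] = (approx, exact)
    print(f"n={n}: numerical {approx:.6f} vs exact {exact:.6f}")
```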


4. Derivative of a Linear Composite Power Function

This result extends the Power Formula to a linear expression $(ax + b)$ raised to a power $n$. It is a specific application of what is later known as the Chain Rule.

Given: A function $f(x) = (ax + b)^n$, where $n$ is a positive integer ($n \in \mathbb{N}$) and $a, b$ are fixed real numbers.

To Prove: $\frac{d}{dx}(ax + b)^n = n(ax + b)^{n-1} \cdot a$

Proof:

By the definition of the first principle of derivatives:

$f'(x) = \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h}$

Substituting $f(x) = (ax + b)^n$ into the formula:

$f'(x) = \lim\limits_{h \to 0} \frac{[a(x + h) + b]^n - (ax + b)^n}{h}$

Expanding the term $a(x + h) + b$ in the numerator:

$f'(x) = \lim\limits_{h \to 0} \frac{(ax + ah + b)^n - (ax + b)^n}{h}$

Rearranging the terms inside the first bracket to group $(ax + b)$ together:

$f'(x) = \lim\limits_{h \to 0} \frac{[(ax + b) + ah]^n - (ax + b)^n}{h}$

To use the standard limit identity $\lim\limits_{z \to c} \frac{z^n - c^n}{z - c} = nc^{n-1}$, we need the change in the denominator to match the change in the base of the power. We multiply and divide the expression by $a$:

$f'(x) = a \cdot \lim\limits_{h \to 0} \frac{[(ax + b) + ah]^n - (ax + b)^n}{ah}$

(Multiplying and dividing by $a$)

Now, let us consider the transformations in the limit:

1. Let $Z = (ax + b) + ah$

2. Let $C = (ax + b)$

3. As $h \to 0$, it implies that $ah \to 0$, which means $Z \to C$.

4. Also, $ah = Z - C$.

Substituting these into our limit expression:

$f'(x) = a \cdot \lim\limits_{Z \to C} \frac{Z^n - C^n}{Z - C}$

Applying the standard limit identity:

$f'(x) = a \cdot (n \cdot C^{n-1})$

$\left[\because \lim\limits_{Z \to C} \frac{Z^n - C^n}{Z - C} = nC^{n-1}\right]$

Replacing $C$ with its original value $(ax + b)$:

$f'(x) = a \cdot n(ax + b)^{n-1}$

Therefore, we conclude:

$\mathbf{\frac{d}{dx}(ax + b)^n = n(ax + b)^{n-1} \cdot a}$

Alternate Solution (Using Binomial Theorem)

Let $f(x) = (ax + b)^n$. By first principles:

$f'(x) = \lim\limits_{h \to 0} \frac{[(ax + b) + ah]^n - (ax + b)^n}{h}$

Let $u = (ax + b)$. Then:

$f'(x) = \lim\limits_{h \to 0} \frac{(u + ah)^n - u^n}{h}$

Expanding $(u + ah)^n$ using the Binomial Theorem:

$(u + ah)^n = u^n + n \cdot u^{n-1}(ah) + \frac{n(n-1)}{2!} u^{n-2}(ah)^2 + \dots$

Substituting the expansion into the limit:

$f'(x) = \lim\limits_{h \to 0} \frac{[u^n + n \cdot u^{n-1}ah + \frac{n(n-1)}{2!}u^{n-2}a^2h^2 + \dots] - u^n}{h}$

Canceling $u^n$ and dividing each term by $h$:

$f'(x) = \lim\limits_{h \to 0} \left[ n \cdot u^{n-1}a + \frac{n(n-1)}{2!}u^{n-2}a^2h + \dots \right]$

As $h \to 0$, all terms containing $h$ become zero:

$f'(x) = n \cdot u^{n-1}a$

Substituting $u = ax + b$ back:

$f'(x) = n(ax + b)^{n-1} \cdot a$

REMARK:

This result is extremely useful for differentiating linear expressions without expanding the bracket. It is true for all rational values of $n$. For example, if we need to differentiate $\sqrt{2x + 3}$, we can treat it as $(2x + 3)^{1/2}$ and apply this formula directly.
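The example in the remark can be checked numerically. This Python sketch (an illustration, not part of the text) compares a central difference quotient for $\sqrt{2x+3}$ with the value the formula predicts, at an assumed test point $x = 1$:

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Central difference quotient approximating f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = (2x + 3)^(1/2), so the formula gives
# f'(x) = (1/2)(2x + 3)^(-1/2) * 2 = 1 / sqrt(2x + 3).
x0 = 1.0
approx = numerical_derivative(lambda x: math.sqrt(2 * x + 3), x0)
exact = 1 / math.sqrt(2 * x0 + 3)
print(approx, exact)   # both close to 1/sqrt(5), about 0.4472
```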


5. Algebra of Derivatives

The algebra of derivatives consists of rules that allow us to find the derivative of complex functions by breaking them down into simpler parts. These rules are essential for evaluating derivatives efficiently.

(i) Scalar Multiple Rule

If $f(x)$ is a differentiable function and $c$ is a constant, then the derivative of $c \cdot f(x)$ is $c$ times the derivative of $f(x)$.

To Prove: $\frac{d}{dx}(c \cdot f(x)) = c \cdot f'(x)$

Proof: Let $g(x) = c \cdot f(x)$. By first principles:

$g'(x) = \lim\limits_{h \to 0} \frac{g(x+h) - g(x)}{h}$

$g'(x) = \lim\limits_{h \to 0} \frac{c \cdot f(x+h) - c \cdot f(x)}{h}$

Taking the constant $c$ outside the limit:

$g'(x) = c \cdot \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h}$

$g'(x) = c \cdot f'(x)$

[$\because \text{Definition of } f'(x)$]

Thus, the derivative of a scalar multiple of a function is the scalar multiple of the derivative of that function.

(ii) Sum and Difference Rules

The derivative of the sum or difference of two differentiable functions is the sum or difference of their respective derivatives.

To Prove: $\frac{d}{dx}(f(x) \pm g(x)) = f'(x) \pm g'(x)$

Proof for Sum: Let $s(x) = f(x) + g(x)$ (we write $s$ for the sum, since the letter $h$ is reserved for the increment).

$s'(x) = \lim\limits_{h \to 0} \frac{[f(x+h) + g(x+h)] - [f(x) + g(x)]}{h}$

Rearranging the terms:

$s'(x) = \lim\limits_{h \to 0} \frac{[f(x+h) - f(x)] + [g(x+h) - g(x)]}{h}$

Applying the limit property (the limit of a sum is the sum of the limits):

$s'(x) = \lim\limits_{h \to 0} \frac{f(x+h) - f(x)}{h} + \lim\limits_{h \to 0} \frac{g(x+h) - g(x)}{h}$

$\mathbf{s'(x) = f'(x) + g'(x)}$

Similarly, it can be proved for the difference: $\frac{d}{dx}(f(x) - g(x)) = f'(x) - g'(x)$.
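A numerical sanity check of the sum rule (a sketch, not part of the proof), using $f(x) = x^2$ and $g(x) = \sin x$ at an assumed test point:

```python
import math

def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

x0 = 0.7
# Derivative of the sum vs. sum of the derivatives:
lhs = deriv(lambda x: x ** 2 + math.sin(x), x0)
rhs = deriv(lambda x: x ** 2, x0) + deriv(math.sin, x0)
print(lhs, rhs)   # both close to 2(0.7) + cos(0.7)
```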

(iii) Product Rule (Leibniz's Rule)

The derivative of the product of two functions is given by: (First function $\times$ Derivative of Second) + (Second function $\times$ Derivative of First).

To Prove: $\frac{d}{dx}(f(x) \cdot g(x)) = f(x)g'(x) + g(x)f'(x)$

Proof: Let $p(x) = f(x)g(x)$ (writing $p$ for the product avoids a clash with the increment $h$).

$p'(x) = \lim\limits_{h \to 0} \frac{f(x+h)g(x+h) - f(x)g(x)}{h}$

To evaluate this, we add and subtract the term $f(x+h)g(x)$ in the numerator:

$p'(x) = \lim\limits_{h \to 0} \frac{f(x+h)g(x+h) - f(x+h)g(x) + f(x+h)g(x) - f(x)g(x)}{h}$

Grouping the terms:

$p'(x) = \lim\limits_{h \to 0} \left[ f(x+h) \frac{g(x+h) - g(x)}{h} + g(x) \frac{f(x+h) - f(x)}{h} \right]$

Since $f$ is differentiable, it is also continuous, so $\lim\limits_{h \to 0} f(x+h) = f(x)$:

$\mathbf{p'(x) = f(x)g'(x) + g(x)f'(x)}$

Extension of Product Rule:

For three differentiable functions $u$, $v$, and $w$, the derivative of the product is:

$\frac{d}{dx}(uvw) = (uv)w' + (uw)v' + (vw)u'$
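The product rule can be spot-checked numerically; a Python sketch (not part of the proof) with $f(x) = x^2$ and $g(x) = \sin x$, assuming the test point $x = 1.2$:

```python
import math

def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

x0 = 1.2
# Product rule: d/dx [x^2 sin x] = x^2 cos x + (sin x)(2x)
lhs = deriv(lambda x: x ** 2 * math.sin(x), x0)
rhs = x0 ** 2 * math.cos(x0) + math.sin(x0) * 2 * x0
print(lhs, rhs)
```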

(iv) Quotient Rule

The derivative of the quotient of two functions is the (denominator times derivative of numerator minus numerator times derivative of denominator) divided by the square of the denominator.

To Prove: $\frac{d}{dx}\left(\frac{f(x)}{g(x)}\right) = \frac{g(x)f'(x) - f(x)g'(x)}{[g(x)]^2}$

Proof: Let $q(x) = \frac{f(x)}{g(x)}$, where $g(x) \neq 0$ (writing $q$ for the quotient avoids a clash with the increment $h$).

$q'(x) = \lim\limits_{h \to 0} \frac{\frac{f(x+h)}{g(x+h)} - \frac{f(x)}{g(x)}}{h}$

Taking the LCM in the numerator:

$q'(x) = \lim\limits_{h \to 0} \frac{f(x+h)g(x) - f(x)g(x+h)}{h \cdot g(x+h) \cdot g(x)}$

Subtracting and adding $f(x)g(x)$ in the numerator:

$q'(x) = \lim\limits_{h \to 0} \frac{[f(x+h)g(x) - f(x)g(x)] - [f(x)g(x+h) - f(x)g(x)]}{h \cdot g(x+h) \cdot g(x)}$

Rearranging:

$q'(x) = \lim\limits_{h \to 0} \frac{g(x) \left[ \frac{f(x+h) - f(x)}{h} \right] - f(x) \left[ \frac{g(x+h) - g(x)}{h} \right]}{g(x+h) \cdot g(x)}$

Applying the limits as $h \to 0$:

$q'(x) = \frac{g(x)f'(x) - f(x)g'(x)}{[g(x)]^2}$

[$\because \lim\limits_{h \to 0} g(x+h) = g(x)$, by the continuity of $g$]
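A numerical check of the quotient rule (an illustration only), using $f(x) = x^2 + 1$ over $g(x) = x^3 + 2$, which is nonzero near the assumed test point $x = 1.5$:

```python
def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

x0 = 1.5
f = lambda x: x ** 2 + 1   # numerator, with f'(x) = 2x
g = lambda x: x ** 3 + 2   # denominator, with g'(x) = 3x^2

lhs = deriv(lambda x: f(x) / g(x), x0)
# Quotient rule: (g f' - f g') / g^2
rhs = (g(x0) * 2 * x0 - f(x0) * 3 * x0 ** 2) / g(x0) ** 2
print(lhs, rhs)
```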


6. Derivative of a Function Raised to a Power (Generalized Power Rule)

In this section, we establish the rule for differentiating a function $f(x)$ raised to a positive integer power $n$. This is a generalization of the power formula $x^n$, proved using the Principle of Mathematical Induction.

To Prove: $\frac{d}{dx}([f(x)]^n) = n[f(x)]^{n-1} \cdot f'(x)$, for all $n \in \mathbb{N}$.

Proof:

Let the given statement be $P(n): \frac{d}{dx}([f(x)]^n) = n[f(x)]^{n-1} \cdot f'(x)$.

Step 1: Check for $n = 1$

L.H.S. = $\frac{d}{dx}([f(x)]^1) = \frac{d}{dx}(f(x)) = f'(x)$

R.H.S. = $1 \cdot [f(x)]^{1-1} \cdot f'(x) = 1 \cdot [f(x)]^0 \cdot f'(x) = 1 \cdot 1 \cdot f'(x) = f'(x)$

Since L.H.S. = R.H.S., the result is true for $n = 1$.

Step 2: Assume the result is true for $n = m$

Let us assume that $P(m)$ is true, where $m$ is some positive integer.

$\frac{d}{dx}([f(x)]^m) = m[f(x)]^{m-1} \cdot f'(x)$

... (i)

Step 3: Prove the result for $n = m + 1$

We need to show that $\frac{d}{dx}([f(x)]^{m+1}) = (m+1)[f(x)]^m \cdot f'(x)$.

Starting with the L.H.S.:

$\frac{d}{dx}([f(x)]^{m+1}) = \frac{d}{dx}([f(x)]^m \cdot f(x))$

Applying the Product Rule of differentiation:

$=[f(x)]^m \cdot \frac{d}{dx}(f(x)) + f(x) \cdot \frac{d}{dx}([f(x)]^m)$

[Product Rule]

Substituting the value from our assumption in equation (i):

$=[f(x)]^m \cdot f'(x) + f(x) \cdot \{m[f(x)]^{m-1} \cdot f'(x) \}$

[Using (i)]

Simplifying the second term using laws of exponents ($a^1 \cdot a^{m-1} = a^m$):

$= [f(x)]^m \cdot f'(x) + m[f(x)]^m \cdot f'(x)$

Taking the common factor $[f(x)]^m \cdot f'(x)$ out:

$= (1 + m) [f(x)]^m \cdot f'(x)$

$= (m + 1) [f(x)]^m \cdot f'(x)$

This is exactly the form of the result for $n = m+1$.

Conclusion:

Since the result is true for $n = 1$ and its truth for $n = m$ implies its truth for $n = m + 1$, by the Principle of Mathematical Induction, the result is true for all positive integers $n \in \mathbb{N}$.

$\mathbf{\frac{d}{dx}([f(x)]^n) = n[f(x)]^{n-1} \cdot f'(x)}$
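The generalized power rule can be verified numerically; the sketch below (not part of the induction proof) takes $f(x) = \sin x$ and $n = 3$ at an assumed test point:

```python
import math

def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

x0 = 0.9
# d/dx [sin x]^3 should equal 3 [sin x]^2 cos x by the rule just proved.
lhs = deriv(lambda x: math.sin(x) ** 3, x0)
rhs = 3 * math.sin(x0) ** 2 * math.cos(x0)
print(lhs, rhs)
```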


7. Derivative of Polynomial Functions

A polynomial function is an expression consisting of variables and coefficients, involving only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables. The derivative of a polynomial is found by applying the sum rule, scalar multiple rule, and the power formula to each term individually.

Given: Let $f(x)$ be a polynomial function of degree $n$ defined as:

$f(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + \dots + a_{n-1}x + a_n$

Where $a_0, a_1, a_2, \dots, a_n$ are fixed real numbers (coefficients) and $n$ is a positive integer.

To Prove: $f'(x) = na_0x^{n-1} + (n - 1)a_1x^{n-2} + (n - 2)a_2x^{n-3} + \dots + a_{n-1}$

Proof:

To find the derivative of the polynomial, we differentiate the function with respect to $x$:

$f'(x) = \frac{d}{dx} [a_0x^n + a_1x^{n-1} + a_2x^{n-2} + \dots + a_{n-1}x + a_n]$

By applying the Sum Rule of derivatives, we can distribute the derivative operator over each term:

$f'(x) = \frac{d}{dx}(a_0x^n) + \frac{d}{dx}(a_1x^{n-1}) + \frac{d}{dx}(a_2x^{n-2}) + \dots + \frac{d}{dx}(a_{n-1}x) + \frac{d}{dx}(a_n)$

Now, applying the Scalar Multiple Rule, we take the constant coefficients outside the derivative:

$f'(x) = a_0 \frac{d}{dx}(x^n) + a_1 \frac{d}{dx}(x^{n-1}) + a_2 \frac{d}{dx}(x^{n-2}) + \dots + a_{n-1} \frac{d}{dx}(x) + \frac{d}{dx}(a_n)$

Using the Power Formula $\frac{d}{dx}(x^n) = nx^{n-1}$ and the Constant Rule $\frac{d}{dx}(c) = 0$ for each term:

$\frac{d}{dx}(x^n) = nx^{n-1}$

[Power Formula]

$\frac{d}{dx}(x) = 1$

[Derivative of $x$]

$\frac{d}{dx}(a_n) = 0$

[Derivative of Constant]

Substituting these results back into the expression for $f'(x)$:

$f'(x) = a_0(nx^{n-1}) + a_1((n-1)x^{n-2}) + a_2((n-2)x^{n-3}) + \dots + a_{n-1}(1) + 0$

On simplifying, we get:

$\mathbf{f'(x) = na_0x^{n-1} + (n-1)a_1x^{n-2} + (n-2)a_2x^{n-3} + \dots + a_{n-1}}$


Example of Polynomial Differentiation

Let $f(x) = 5x^4 - 3x^2 + 7x - 12$. Find $f'(x)$.

Solution:

Differentiating each term:

$f'(x) = \frac{d}{dx}(5x^4) - \frac{d}{dx}(3x^2) + \frac{d}{dx}(7x) - \frac{d}{dx}(12)$

$f'(x) = 5(4x^3) - 3(2x^1) + 7(1) - 0$

$f'(x) = 20x^3 - 6x + 7$

This demonstrates that the derivative of a polynomial is always another polynomial of one degree less than the original (provided the original degree $n \geq 1$).
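The worked example can be confirmed numerically; the Python sketch below (an illustration, not part of the solution) compares a central difference quotient of $f$ with the computed $f'$ at a few points:

```python
def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

f = lambda x: 5 * x ** 4 - 3 * x ** 2 + 7 * x - 12
f_prime = lambda x: 20 * x ** 3 - 6 * x + 7   # the answer derived above

for x0 in (-1.0, 0.0, 2.0):
    print(x0, deriv(f, x0), f_prime(x0))
```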


8. Derivatives of Trigonometric Functions

Trigonometric functions are periodic and continuous within their domains. Their derivatives are foundational for solving problems in calculus, physics, and engineering. We derive these results using the First Principle of Derivatives and standard trigonometric identities.

(i) Derivative of $\sin x$

To Prove: $\frac{d}{dx}(\sin x) = \cos x$

Proof: Let $f(x) = \sin x$. By first principles:

$f'(x) = \lim\limits_{h \to 0} \frac{\sin(x+h) - \sin x}{h}$

Using the identity $\sin C - \sin D = 2 \cos\left(\frac{C+D}{2}\right) \sin\left(\frac{C-D}{2}\right)$:

$f'(x) = \lim\limits_{h \to 0} \frac{2 \cos(x + \frac{h}{2}) \sin(\frac{h}{2})}{h}$

[Sum-to-Product Identity]

Rearranging the expression to utilize the standard limit $\lim\limits_{\theta \to 0} \frac{\sin \theta}{\theta} = 1$:

$f'(x) = \lim\limits_{h \to 0} \left[ \cos(x + \frac{h}{2}) \cdot \frac{\sin(h/2)}{h/2} \right]$

Applying the limit as $h \to 0$:

$f'(x) = \cos(x + 0) \cdot 1$

[$\because \lim\limits_{h \to 0} \frac{\sin(h/2)}{h/2} = 1$]

$\mathbf{\frac{d}{dx}(\sin x) = \cos x}$

(ii) Derivative of $\cos x$

To Prove: $\frac{d}{dx}(\cos x) = -\sin x$

Proof: Let $f(x) = \cos x$. By first principles:

$f'(x) = \lim\limits_{h \to 0} \frac{\cos(x+h) - \cos x}{h}$

Using the identity $\cos C - \cos D = -2 \sin\left(\frac{C+D}{2}\right) \sin\left(\frac{C-D}{2}\right)$:

$f'(x) = \lim\limits_{h \to 0} \frac{-2 \sin(x + \frac{h}{2}) \sin(\frac{h}{2})}{h}$

[Sum-to-Product Identity]

Rearranging:

$f'(x) = \lim\limits_{h \to 0} \left[ -\sin(x + \frac{h}{2}) \cdot \frac{\sin(h/2)}{h/2} \right]$

Applying the limit as $h \to 0$:

$f'(x) = -\sin(x + 0) \cdot 1 = -\sin x$

$\mathbf{\frac{d}{dx}(\cos x) = -\sin x}$
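Both results can be spot-checked numerically; this Python sketch (not part of the proofs) compares difference quotients of $\sin$ and $\cos$ with $\cos x$ and $-\sin x$ at a few assumed test points:

```python
import math

def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

points = (0.0, math.pi / 6, 1.0)
for x0 in points:
    print(deriv(math.sin, x0), math.cos(x0))    # d/dx sin x = cos x
    print(deriv(math.cos, x0), -math.sin(x0))   # d/dx cos x = -sin x
```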

(iii) Derivative of $\tan x$

The tangent function, $\tan x$, is defined as the ratio of the sine function to the cosine function. To find its derivative, we utilize the Quotient Rule of differentiation, which is used when a function is expressed as the division of two differentiable functions.

To Prove: $\frac{d}{dx}(\tan x) = \sec^2 x$

Given: $f(x) = \tan x$

Proof:

First, we express $\tan x$ in terms of $\sin x$ and $\cos x$:

$f(x) = \frac{\sin x}{\cos x}$

... (i)

We now apply the Quotient Rule, which states that for a function $h(x) = \frac{u(x)}{v(x)}$:

$\frac{d}{dx} \left( \frac{u}{v} \right) = \frac{v \cdot \frac{du}{dx} - u \cdot \frac{dv}{dx}}{v^2}$

Let us define $u$ and $v$ from our equation (i):

$u = \sin x$

(Numerator)

$v = \cos x$

(Denominator)

Now, we find their individual derivatives:

$\frac{du}{dx} = \cos x$

[$\because \frac{d}{dx} \sin x = \cos x$]

$\frac{dv}{dx} = -\sin x$

[$\because \frac{d}{dx} \cos x = -\sin x$]

Substituting these values into the Quotient Rule formula:

$f'(x) = \frac{(\cos x) \cdot (\cos x) - (\sin x) \cdot (-\sin x)}{(\cos x)^2}$

Simplifying the numerator by multiplying the terms:

$f'(x) = \frac{\cos^2 x - (-\sin^2 x)}{\cos^2 x}$

$f'(x) = \frac{\cos^2 x + \sin^2 x}{\cos^2 x}$

Applying the Fundamental Trigonometric Identity:

$\cos^2 x + \sin^2 x = 1$

(Pythagorean Identity)

Replacing the numerator with $1$:

$f'(x) = \frac{1}{\cos^2 x}$

Since the reciprocal of cosine is secant ($\frac{1}{\cos x} = \sec x$):

$f'(x) = \sec^2 x$

Therefore, we have established that:

$\mathbf{\frac{d}{dx}(\tan x) = \sec^2 x}$

Domain and Constraints

The function $f(x) = \tan x$ is not defined where $\cos x = 0$. This occurs at odd multiples of $\frac{\pi}{2}$. Therefore, the derivative $\sec^2 x$ is valid for all $x \in \mathbb{R}$ except:

$x = (2n + 1)\frac{\pi}{2}$, where $n \in \mathbb{Z}$.

Alternative Proof (Using First Principles)

Let $f(x) = \tan x$. By definition:

$f'(x) = \lim\limits_{h \to 0} \frac{\tan(x + h) - \tan x}{h}$

Using the identity $\tan A - \tan B = \frac{\sin(A - B)}{\cos A \cos B}$:

$f'(x) = \lim\limits_{h \to 0} \frac{\sin(x + h - x)}{h \cos(x + h) \cos x}$

$f'(x) = \lim\limits_{h \to 0} \left[ \frac{\sin h}{h} \cdot \frac{1}{\cos(x + h) \cos x} \right]$

Applying the standard limit $\lim\limits_{h \to 0} \frac{\sin h}{h} = 1$:

$f'(x) = 1 \cdot \frac{1}{\cos x \cdot \cos x} = \frac{1}{\cos^2 x} = \sec^2 x$

(iv) Derivative of $\cot x$

The cotangent function, $\cot x$, is the reciprocal of the tangent function and is defined as the ratio of $\cos x$ to $\sin x$. Its derivative is the negative square of the cosecant function.

To Prove: $\frac{d}{dx}(\cot x) = -\text{cosec}^2 x$

Proof (Using Quotient Rule):

Let $f(x) = \cot x$. We can write this as:

$f(x) = \frac{\cos x}{\sin x}$

... (i)

Using the Quotient Rule $\frac{d}{dx}\left(\frac{u}{v}\right) = \frac{v \frac{du}{dx} - u \frac{dv}{dx}}{v^2}$:

Let $u = \cos x$ and $v = \sin x$. Then:

$\frac{du}{dx} = -\sin x$

$\frac{dv}{dx} = \cos x$

Substituting these into the quotient rule formula:

$f'(x) = \frac{(\sin x)(-\sin x) - (\cos x)(\cos x)}{(\sin x)^2}$

$f'(x) = \frac{-\sin^2 x - \cos^2 x}{\sin^2 x}$

Taking the negative sign as a common factor in the numerator:

$f'(x) = \frac{-(\sin^2 x + \cos^2 x)}{\sin^2 x}$

[$\because \sin^2 x + \cos^2 x = 1$]

$f'(x) = \frac{-1}{\sin^2 x}$

Since $\frac{1}{\sin x} = \text{cosec } x$, we get:

$\mathbf{\frac{d}{dx}(\cot x) = -\text{cosec}^2 x}$

Proof (Using First Principles):

By definition, $f'(x) = \lim\limits_{h \to 0} \frac{\cot(x+h) - \cot x}{h}$. Converting to sine and cosine:

$f'(x) = \lim\limits_{h \to 0} \frac{\frac{\cos(x+h)}{\sin(x+h)} - \frac{\cos x}{\sin x}}{h}$

$f'(x) = \lim\limits_{h \to 0} \frac{\sin x \cos(x+h) - \cos x \sin(x+h)}{h \sin(x+h) \sin x}$

Using the identity $\sin A \cos B - \cos A \sin B = \sin(A - B)$, where $A = x$ and $B = x+h$:

$f'(x) = \lim\limits_{h \to 0} \frac{\sin(x - (x+h))}{h \sin(x+h) \sin x}$

[$\sin(A-B)$ Identity]

$f'(x) = \lim\limits_{h \to 0} \frac{\sin(-h)}{h \sin(x+h) \sin x}$

Since $\sin(-h) = -\sin h$:

$f'(x) = - \left( \lim\limits_{h \to 0} \frac{\sin h}{h} \right) \cdot \lim\limits_{h \to 0} \frac{1}{\sin(x+h) \sin x}$

$f'(x) = -1 \cdot \frac{1}{\sin x \cdot \sin x}$

[$\because \lim\limits_{h \to 0} \frac{\sin h}{h} = 1$]

$\mathbf{f'(x) = -\text{cosec}^2 x}$

Domain Considerations

The derivative of $\cot x$ is valid for all $x \in \mathbb{R}$ except $x = n\pi$ (where $\sin x = 0$).

(v) Derivative of $\sec x$

The secant function, $\sec x$, is defined as the reciprocal of the cosine function. Its derivative is the product of the secant and tangent functions. This result is essential for calculating rates of change in trigonometric models and for evaluating complex integrals.

To Prove: $\frac{d}{dx}(\sec x) = \sec x \tan x$

Given: $f(x) = \sec x$

Proof (Using Quotient Rule):

First, we express the secant function as the reciprocal of the cosine function:

$f(x) = \frac{1}{\cos x}$

We use the Quotient Rule of differentiation:

$\frac{d}{dx} \left( \frac{u}{v} \right) = \frac{v \frac{du}{dx} - u \frac{dv}{dx}}{v^2}$

Let the functions be defined as:

$u = 1$

(Numerator)

$v = \cos x$

(Denominator)

Differentiating $u$ and $v$ with respect to $x$:

$\frac{du}{dx} = 0$

[$\because$ Derivative of a constant is 0]

$\frac{dv}{dx} = -\sin x$

[$\because \frac{d}{dx} \cos x = -\sin x$]

Substituting these values into the Quotient Rule formula:

$f'(x) = \frac{(\cos x) \cdot (0) - (1) \cdot (-\sin x)}{(\cos x)^2}$

Simplifying the numerator:

$f'(x) = \frac{0 + \sin x}{\cos^2 x}$

$f'(x) = \frac{\sin x}{\cos^2 x}$

To obtain the standard form, we decompose the fraction:

$f'(x) = \frac{1}{\cos x} \cdot \frac{\sin x}{\cos x}$

Using the trigonometric definitions $\frac{1}{\cos x} = \sec x$ and $\frac{\sin x}{\cos x} = \tan x$:

$f'(x) = \sec x \tan x$

Thus, we have established that:

$\mathbf{\frac{d}{dx}(\sec x) = \sec x \tan x}$

Alternative Proof (Using First Principles)

By the definition of the first principle of derivatives:

$f'(x) = \lim\limits_{h \to 0} \frac{\sec(x+h) - \sec x}{h}$

Expressing the secant function in terms of cosine:

$f'(x) = \lim\limits_{h \to 0} \frac{\frac{1}{\cos(x+h)} - \frac{1}{\cos x}}{h}$

Taking the LCM in the numerator:

$f'(x) = \lim\limits_{h \to 0} \frac{\cos x - \cos(x+h)}{h \cos(x+h) \cos x}$

Using the sum-to-product identity $\cos C - \cos D = 2 \sin\left(\frac{C+D}{2}\right) \sin\left(\frac{D-C}{2}\right)$:

$f'(x) = \lim\limits_{h \to 0} \frac{2 \sin\left(\frac{x+h+x}{2}\right) \sin\left(\frac{x+h-x}{2}\right)}{h \cos(x+h) \cos x}$

$f'(x) = \lim\limits_{h \to 0} \frac{2 \sin\left(x + \frac{h}{2}\right) \sin\left(\frac{h}{2}\right)}{h \cos(x+h) \cos x}$

Rearranging the terms to use the standard limit $\lim\limits_{\theta \to 0} \frac{\sin \theta}{\theta} = 1$:

$f'(x) = \lim\limits_{h \to 0} \left[ \frac{\sin\left(x + \frac{h}{2}\right)}{\cos(x+h) \cos x} \cdot \frac{\sin(h/2)}{h/2} \right]$

Applying the limit as $h \to 0$:

$f'(x) = \frac{\sin(x + 0)}{\cos(x + 0) \cos x} \cdot 1$

[$\because \lim\limits_{h \to 0} \frac{\sin(h/2)}{h/2} = 1$]

$f'(x) = \frac{\sin x}{\cos^2 x} = \sec x \tan x$

Domain and Vertical Asymptotes

The function $f(x) = \sec x$ and its derivative $\sec x \tan x$ are not defined where $\cos x = 0$. This occurs at odd multiples of $\pi/2$:

$x = (2n + 1)\frac{\pi}{2}$, where $n \in \mathbb{Z}$.

Geometrically, at these values, the function has vertical asymptotes, and the slope of the curve is undefined.

(vi) Derivative of $\text{cosec } x$

The cosecant function, $\text{cosec } x$, is the reciprocal of the sine function. Its derivative is the negative product of the cosecant and cotangent functions. Like other trigonometric derivatives, this can be derived using the Quotient Rule or from First Principles.

To Prove: $\frac{d}{dx}(\text{cosec } x) = -\text{cosec } x \cot x$

Given: $f(x) = \text{cosec } x$

Proof (Using Quotient Rule):

First, we express the cosecant function in terms of the sine function:

$f(x) = \frac{1}{\sin x}$

... (i)

We use the Quotient Rule of differentiation, which states:

$\frac{d}{dx} \left( \frac{u}{v} \right) = \frac{v \frac{du}{dx} - u \frac{dv}{dx}}{v^2}$

Let the numerator and denominator be defined as:

$u = 1$

(Constant)

$v = \sin x$

(Trigonometric function)

Differentiating $u$ and $v$ with respect to $x$:

$\frac{du}{dx} = 0$

[Derivative of constant]

$\frac{dv}{dx} = \cos x$

[$\because \frac{d}{dx} \sin x = \cos x$]

Substituting these derivatives into the Quotient Rule formula:

$f'(x) = \frac{(\sin x) \cdot (0) - (1) \cdot (\cos x)}{(\sin x)^2}$

Simplifying the numerator:

$f'(x) = \frac{0 - \cos x}{\sin^2 x}$

$f'(x) = \frac{-\cos x}{\sin^2 x}$

To reach the standard identity, we split the denominator:

$f'(x) = -\left( \frac{1}{\sin x} \cdot \frac{\cos x}{\sin x} \right)$

Applying the trigonometric definitions $\frac{1}{\sin x} = \text{cosec } x$ and $\frac{\cos x}{\sin x} = \cot x$:

$f'(x) = -\text{cosec } x \cot x$

... (ii)

Thus, we conclude:

$\mathbf{\frac{d}{dx}(\text{cosec } x) = -\text{cosec } x \cot x}$

Alternative Solution (Using First Principles)

By the definition of the first principle:

$f'(x) = \lim\limits_{h \to 0} \frac{\text{cosec}(x+h) - \text{cosec } x}{h}$

Expressing in terms of sine:

$f'(x) = \lim\limits_{h \to 0} \frac{\frac{1}{\sin(x+h)} - \frac{1}{\sin x}}{h}$

Taking the LCM in the numerator:

$f'(x) = \lim\limits_{h \to 0} \frac{\sin x - \sin(x+h)}{h \sin(x+h) \sin x}$

Using the sum-to-product identity $\sin C - \sin D = 2 \cos\left(\frac{C+D}{2}\right) \sin\left(\frac{C-D}{2}\right)$:

$f'(x) = \lim\limits_{h \to 0} \frac{2 \cos\left(\frac{x + x+h}{2}\right) \sin\left(\frac{x - (x+h)}{2}\right)}{h \sin(x+h) \sin x}$

$f'(x) = \lim\limits_{h \to 0} \frac{2 \cos\left(x + \frac{h}{2}\right) \sin\left(-\frac{h}{2}\right)}{h \sin(x+h) \sin x}$

Since $\sin(-\theta) = -\sin \theta$:

$f'(x) = \lim\limits_{h \to 0} \frac{-2 \cos\left(x + \frac{h}{2}\right) \sin\left(\frac{h}{2}\right)}{h \sin(x+h) \sin x}$

Rearranging to form the standard limit $\lim\limits_{\theta \to 0} \frac{\sin \theta}{\theta} = 1$:

$f'(x) = \lim\limits_{h \to 0} \left[ -\frac{\cos\left(x + \frac{h}{2}\right)}{\sin(x+h) \sin x} \cdot \frac{\sin(h/2)}{h/2} \right]$

Applying the limit $h \to 0$:

$f'(x) = -\frac{\cos x}{\sin x \cdot \sin x} \cdot 1$

[$\because \lim\limits_{h \to 0} \frac{\sin(h/2)}{h/2} = 1$]

$f'(x) = - \frac{\cos x}{\sin^2 x} = -\text{cosec } x \cot x$

Domain and Vertical Asymptotes

The function $f(x) = \text{cosec } x$ and its derivative are not defined where $\sin x = 0$. This occurs at integral multiples of $\pi$:

$x = n\pi$, where $n \in \mathbb{Z}$.

At these points, the graph of the cosecant function has vertical asymptotes, and the rate of change is undefined.


Summary Table of Trigonometric Derivatives

Function $f(x)$ Derivative $f'(x)$ Condition/Domain
$\sin x$ $\cos x$ $x \in \mathbb{R}$
$\cos x$ $-\sin x$ $x \in \mathbb{R}$
$\tan x$ $\sec^2 x$ $x \neq (2n+1)\frac{\pi}{2}$
$\cot x$ $-\text{cosec}^2 x$ $x \neq n\pi$
$\sec x$ $\sec x \tan x$ $x \neq (2n+1)\frac{\pi}{2}$
$\text{cosec } x$ $-\text{cosec } x \cot x$ $x \neq n\pi$
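The whole table can be spot-checked numerically at a single point lying inside every domain; a Python sketch (an illustration, assuming the test point $x = 0.8$):

```python
import math

def deriv(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)  # central difference quotient

sec = lambda x: 1 / math.cos(x)
cosec = lambda x: 1 / math.sin(x)
cot = lambda x: math.cos(x) / math.sin(x)

x0 = 0.8   # avoids every excluded point listed in the table
table = [
    ("sin", math.sin, math.cos(x0)),
    ("cos", math.cos, -math.sin(x0)),
    ("tan", math.tan, sec(x0) ** 2),
    ("cot", cot, -cosec(x0) ** 2),
    ("sec", sec, sec(x0) * math.tan(x0)),
    ("cosec", cosec, -cosec(x0) * cot(x0)),
]
for name, fn, expected in table:
    print(name, deriv(fn, x0), expected)
```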