Upper and Lower Bounds on New F-Divergence in Terms of Relative J-Divergence Measure

In recent years, Dragomir, Taneja and Pranesh Kumar have contributed a substantial body of work providing different kinds of bounds on distance, information and divergence measures. These bounds are very useful and play an important role in many areas, such as sensor networks, testing the order of a Markov chain, risk for binary experiments, and region segmentation and estimation. In this paper, we establish upper and lower bounds on a new f-divergence measure in terms of the relative J-divergence measure. Its particular cases are also considered using the new f-divergence and the associated inequalities.


Introduction
Let
$$\Gamma_n=\Big\{P=(p_1,p_2,\dots,p_n):\ p_i>0,\ \sum_{i=1}^{n}p_i=1\Big\},\qquad n\ge 2,$$
be the set of all complete finite discrete probability distributions. Many information and divergence measures exist in the literature of information theory and statistics. Csiszar [1,2] introduced a generalized measure of information, the f-divergence, given by
$$C_f(P,Q)=\sum_{i=1}^{n}q_i\,f\!\left(\frac{p_i}{q_i}\right),$$
where $f:(0,\infty)\to\mathbb{R}$ is a convex function and $P,Q\in\Gamma_n$. Csiszar's f-divergence is a general class of divergence measures that includes several divergences used in measuring the distance or affinity between two probability distributions. The class is generated by a convex function f defined on $(0,\infty)$, and an important property is that many known divergences can be obtained from it by appropriately defining f. Examples of divergence measures in the Csiszar f-divergence family are the Bhattacharya divergence [3], triangular discrimination [4], relative J-divergence [5], Hellinger discrimination [6], chi-square divergence [7], relative Jensen-Shannon divergence [8], relative arithmetic-geometric divergence [9], and the unified relative Jensen-Shannon and arithmetic-geometric divergence of types [9]. Throughout the paper we derive some well-known divergence measures with the help of the new f-divergence measure. An inequality between the new f-divergence and the relative J-divergence measure is established, bounds on well-known divergence measures in terms of the relative J-divergence measure are studied, and numerical bounds on information divergence measures are also studied.
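As an illustration of how different convex generators recover known members of this class, the following minimal Python sketch evaluates the Csiszar f-divergence for two standard choices of f. The function name csiszar_f_divergence and the example distributions P and Q are illustrative and not taken from the paper.

```python
import numpy as np

def csiszar_f_divergence(p, q, f):
    """Csiszar f-divergence C_f(P, Q) = sum_i q_i * f(p_i / q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

# Two example generating functions (both convex with f(1) = 0):
chi_square = lambda t: (t - 1) ** 2   # yields the chi-square divergence
kl = lambda t: t * np.log(t)          # yields the Kullback-Leibler divergence

P = [0.1, 0.2, 0.3, 0.4]
Q = [0.25, 0.25, 0.25, 0.25]
print(csiszar_f_divergence(P, Q, chi_square))  # chi^2(P, Q)
print(csiszar_f_divergence(P, Q, kl))          # KL(P || Q)
```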

New f-Divergence Measure and Its Particular Cases
Given a convex function $f:(0,\infty)\to\mathbb{R}$, a new f-divergence measure introduced by Jain and Saraswat [10,11] is
$$S_f(P,Q)=\sum_{i=1}^{n}q_i\,f\!\left(\frac{p_i+q_i}{2q_i}\right),\qquad P,Q\in\Gamma_n.$$
Its particular cases include the following. Relative Jensen-Shannon divergence measure: if $f(t)=\log t$, then the relative Jensen-Shannon divergence measure is obtained. Relative arithmetic-geometric divergence measure: if $f(t)=t\log t$, then the relative arithmetic-geometric divergence measure is obtained. Suitable choices of f give the triangular discrimination and the Hellinger discrimination, and a one-parameter choice of f gives the unified relative Jensen-Shannon and arithmetic-geometric divergence measure of type $\alpha$.
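A minimal numerical sketch of this construction is given below, assuming the Jain-Saraswat measure has the form $S_f(P,Q)=\sum_i q_i f\big((p_i+q_i)/2q_i\big)$ stated above. The generating functions listed in the code are illustrative convex choices for the named particular cases; the exact signs and constants in the paper's own equations may differ, and the distributions P and Q are hypothetical.

```python
import numpy as np

def new_f_divergence(p, q, f):
    """Assumed form of the Jain-Saraswat new f-divergence:
    S_f(P, Q) = sum_i q_i * f((p_i + q_i) / (2 * q_i))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f((p + q) / (2.0 * q))))

# Illustrative convex, normalized generators for the named particular cases:
rel_js  = lambda t: -np.log(t)         # relative Jensen-Shannon type
rel_ag  = lambda t: t * np.log(t)      # relative arithmetic-geometric type
triang  = lambda t: (t - 1) ** 2 / t   # triangular-discrimination type
helling = lambda t: 1.0 - np.sqrt(t)   # Hellinger-discrimination type

P = [0.1, 0.2, 0.3, 0.4]
Q = [0.25, 0.25, 0.25, 0.25]
for name, f in [("rel-JS", rel_js), ("rel-AG", rel_ag),
                ("triangular", triang), ("Hellinger", helling)]:
    print(name, new_f_divergence(P, Q, f))
```

Under this assumed form every printed value is non-negative: each generator is convex with $f(1)=0$, and the weighted average of the arguments $(p_i+q_i)/2q_i$ with weights $q_i$ equals 1, so Jensen's inequality applies.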

Inequality between New f-Divergence and Relative J-Divergence Measure
In the following theorem we obtain an inequality between the new f-divergence measure and the relative J-divergence measure. The results are along lines similar to those presented by Dragomir [12] and by Jain and Saraswat [13,11,14]. Let $f:(0,\infty)\to\mathbb{R}$ be normalized, i.e. $f(1)=0$, and satisfy the assumptions below.
If P, Q are discrete probability distributions satisfying the assumption (3.2), then we have the inequality (3.3). Proof: Define the mapping $F_m(\cdot)$. Then $F_m(\cdot)$ is normalized and twice differentiable; since its second derivative is non-negative for all $t\in(r,R)$, it follows that $F_m(\cdot)$ is convex on $(r,R)$. Applying the non-negativity property of the new f-divergence measure to $F_m(\cdot)$, together with the linearity property, we obtain the required inequality.
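The proof appears to follow the familiar Dragomir-type argument: subtract a multiple of the convex generator of the relative J-divergence from f so that the difference remains convex and normalized, then invoke non-negativity and linearity. The sketch below illustrates that argument numerically. It assumes the relative J-divergence is $J_R(P,Q)=\sum_i(p_i-q_i)\log\frac{p_i+q_i}{2q_i}$, that assumption (3.2) bounds the arguments $(p_i+q_i)/2q_i$ between $r$ and $R$, and that the constants take the form $\inf/\sup_{t\in(r,R)}\, t^2 f''(t)/\big(2(t+1)\big)$; these forms, the generator $f(t)=t\log t$, and the distributions are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def new_f_div(p, q, f):
    t = (p + q) / (2.0 * q)
    return float(np.sum(q * f(t)))

def relative_j_divergence(p, q):
    # J_R(P, Q) = sum_i (p_i - q_i) * log((p_i + q_i) / (2 q_i))
    # Note: J_R equals the new f-divergence generated by g(t) = 2 (t - 1) log t.
    return float(np.sum((p - q) * np.log((p + q) / (2.0 * q))))

P = np.array([0.1, 0.2, 0.3, 0.4])
Q = np.array([0.25, 0.25, 0.25, 0.25])

f   = lambda t: t * np.log(t)   # a convex, normalized generator (f(1) = 0)
fpp = lambda t: 1.0 / t         # its second derivative

# r, R bound the arguments (p_i + q_i) / (2 q_i), as assumed for (3.2)
t = (P + Q) / (2.0 * Q)
r, R = t.min(), t.max()

# Assumed form of the constants: inf/sup of t^2 f''(t) / (2 (t + 1)) on [r, R]
grid = np.linspace(r, R, 10_000)
ratio = grid ** 2 * fpp(grid) / (2.0 * (grid + 1.0))
m, M = ratio.min(), ratio.max()

S  = new_f_div(P, Q, f)
JR = relative_j_divergence(P, Q)
print(m * JR <= S <= M * JR)    # expected: True
```

With these choices the check prints True, because $f(t)-m\cdot 2(t-1)\log t$ is convex and normalized on $(r,R)$, so its new f-divergence is non-negative; the upper bound follows in the same way with $M$.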

Some Particular Cases
Using the result (3.3) of Theorem 3.1, we are able to point out the following particular cases, which may be of interest in information theory and statistics.
Hence the function f is decreasing, and then we obtain the corresponding bound.

Result 6.2: Let $P,Q\in\Gamma_n$ be two probability distributions satisfying (3.2); then we have the following relation.
Proof: Consider the corresponding mapping. The function f is convex and normalized, i.e. $f(1)=0$. Hence the function f is increasing, and then we get the stated relation; likewise, the mapping in the next case is increasing and, since $f(1)=0$, is convex and normalized.
Result 6.5: Let $P,Q\in\Gamma_n$ be two probability distributions satisfying (3.2); then we have the following relations.
Proof: Consider the mapping $f:(r,R)\to\mathbb{R}$ from equation (2.14); the function f is convex and normalized, i.e. $f(1)=0$, and then we get the stated relations. The width of the resulting interval is 2.150173.
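For intuition, the width quoted above is presumably the length of a bounding interval of the form $[m\,J_R(P,Q),\,M\,J_R(P,Q)]$ produced by the theorem. The short sketch below computes such a width for hypothetical distributions and the same illustrative constants as in the earlier sketch; the distributions and generator that yield the paper's value 2.150173 are not reproduced here, so the number printed will differ.

```python
import numpy as np

P = np.array([0.05, 0.10, 0.15, 0.70])
Q = np.array([0.40, 0.30, 0.20, 0.10])

t = (P + Q) / (2.0 * Q)
r, R = t.min(), t.max()

fpp = lambda t: 1.0 / t                 # second derivative of f(t) = t log t
grid = np.linspace(r, R, 10_000)
ratio = grid ** 2 * fpp(grid) / (2.0 * (grid + 1.0))
m, M = ratio.min(), ratio.max()

# Relative J-divergence J_R(P, Q) and the width (M - m) * J_R of the interval
JR = float(np.sum((P - Q) * np.log((P + Q) / (2.0 * Q))))
print("interval width:", (M - m) * JR)
```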