A New Inequality on Jain-Saraswat's Divergence for s-Convex Functions

Abstract
In this article, a new inequality on the Jain-Saraswat divergence measure is established for s-convex functions, a class that includes convex functions as a special case. Using this inequality, several particular results are then derived in terms of different divergences at distinct values of s. A numerical verification of these results is also presented.


Introduction
Divergence measures are, in essence, measures of distance between two or more probability distributions; equivalently, they measure the discrimination between probability distributions. An arbitrary divergence measure Ar(Θ, Φ) represents a natural distance from a true probability distribution Θ to an arbitrary probability distribution Φ. Typically, Θ represents an observation or a precisely calculated probability distribution, whereas Φ represents a model, a description, or an approximation of Θ.
Divergence measures are used effectively to resolve various problems in probability theory. The primary purpose of assessing how much information is contained in data is to quantify the amount of meaningful and useful content present in a given data set. This assessment helps us understand the significance, relevance, and potential insights that can be derived from the data; in other words, it allows us to gauge the richness and value of the data in terms of the knowledge it can provide.
All of these are functional (generalized) divergence measures for comparing two discrete probability distributions Θ and Φ at a time, with (Θ, Φ) ∈ γ × γ. Several well-known divergences can be obtained by choosing a suitable convex function in one of these generalized divergences. Csiszar's divergence, for instance,

C_f(Θ, Φ) = Σ_{r=1}^{p} φ_r f(θ_r / φ_r),

is very useful for generating different divergences because of its compact formula. We may say that Csiszar's divergence behaves like a generator of divergences, with the appropriate convex function f acting as the generating function.
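To make this generator view concrete, the following Python sketch evaluates Csiszar's divergence for an arbitrary generating function; the helper name csiszar_divergence and the sample distributions are illustrative choices, not notation from this article.

```python
import numpy as np

def csiszar_divergence(theta, phi, f):
    """Csiszar's f-divergence: C_f(Theta, Phi) = sum_r phi_r * f(theta_r / phi_r)."""
    theta, phi = np.asarray(theta, float), np.asarray(phi, float)
    return float(np.sum(phi * f(theta / phi)))

# Example: f(u) = (u - 1)**2 generates the chi-square divergence sum_r (theta_r - phi_r)**2 / phi_r.
theta = np.array([0.2, 0.5, 0.3])
phi = np.array([0.4, 0.4, 0.2])
print(csiszar_divergence(theta, phi, lambda u: (u - 1.0) ** 2))
```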
Similarly, in 2013, Jain and Saraswat [22] introduced the following generalized divergence measure:

S_h(Θ, Φ) = Σ_{r=1}^{p} φ_r h((θ_r + φ_r)/(2φ_r)), (1)

where θ_r and φ_r are the probability mass functions corresponding to the discrete distributions Θ and Φ, respectively.
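A minimal numerical sketch of this generator, assuming the S_h form displayed above; the identity checked in the comments is a direct algebraic consequence of that assumed form, and the names are illustrative.

```python
import numpy as np

def jain_saraswat(theta, phi, h):
    """Assumed form: S_h(Theta, Phi) = sum_r phi_r * h((theta_r + phi_r) / (2 * phi_r))."""
    theta, phi = np.asarray(theta, float), np.asarray(phi, float)
    return float(np.sum(phi * h((theta + phi) / (2.0 * phi))))

theta = np.array([0.2, 0.5, 0.3])
phi = np.array([0.4, 0.4, 0.2])

# With h(u) = (2u - 1) * log(2u - 1), the assumed form collapses to
# sum_r theta_r * log(theta_r / phi_r), i.e. the Kullback-Leibler divergence.
h = lambda u: (2.0 * u - 1.0) * np.log(2.0 * u - 1.0)
print(jain_saraswat(theta, phi, h))
print(float(np.sum(theta * np.log(theta / phi))))  # should match the line above
```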
In addition, the article [27] reveals the relationship (3) between these divergence measures. In this work we will also use the following generalized means, the m-logarithmic power mean (4) and the identric mean (5), to compress otherwise lengthy calculations:

L_m(a, b) = [(b^(m+1) − a^(m+1)) / ((m + 1)(b − a))]^(1/m), m ≠ −1, 0, a ≠ b, a, b > 0, (4)

I(a, b) = (1/e)(b^b / a^a)^(1/(b − a)), a ≠ b, a, b > 0, (5)

with L_m(a, a) = I(a, a) = a. Definition 1.1, Remark 1.2 and Theorem 1.4 below can be found in the article [30].

Definition 1.1 Let Z be a linear space and let s be a fixed positive real number, i.e., s ∈ (0, ∞). Let B ⊂ Z be a convex subset. Then the mapping h : B → ℝ is said to be s-convex if h(λ₁u + λ₂v) ≤ λ₁^s h(u) + λ₂^s h(v) for all u, v ∈ B and all λ₁, λ₂ ≥ 0 with λ₁ + λ₂ = 1.

Remark 1.2 (b) If 0 < s ≤ 1, every non-negative convex function defined on a convex set in a linear space is also an s-convex function; if s ≥ 1, every non-positive convex function defined on a convex set in a linear space is also an s-convex function.
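Since both means reappear in the later calculations, here is a small Python sketch of (4) and (5) under the standard conventions L_m(a, a) = I(a, a) = a; the function names are illustrative.

```python
import math

def log_power_mean(a, b, m):
    """m-logarithmic power mean L_m(a, b); assumes a, b > 0 and m != -1, 0."""
    if a == b:
        return a
    return ((b ** (m + 1) - a ** (m + 1)) / ((m + 1) * (b - a))) ** (1.0 / m)

def identric_mean(a, b):
    """Identric mean I(a, b) = (1/e) * (b**b / a**a) ** (1 / (b - a)); assumes a, b > 0."""
    if a == b:
        return a
    # Computed in log form to avoid overflow for larger arguments.
    return math.exp((b * math.log(b) - a * math.log(a)) / (b - a) - 1.0)

print(log_power_mean(0.5, 1.4, 0.5), identric_mean(0.5, 1.4))
```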

Theorem 1.4 Let h : B → ℝ be an s-convex function, and let δ_1, δ_2, …, δ_p ≥ 0 with Σ_{r=1}^{p} δ_r = 1. Then

h(Σ_{r=1}^{p} δ_r λ_r) ≤ Σ_{r=1}^{p} δ_r^s h(λ_r)

for all λ_r ∈ B ⊂ Z, where Z is a linear space.
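A quick Monte Carlo spot-check of this Jensen-type bound, under the reading given above: the test function h(u) = (u − 1)², being non-negative and convex, is s-convex for s ∈ (0, 1] by Remark 1.2 (b). The domain B = [0, 5] and all names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5
h = lambda u: (u - 1.0) ** 2   # non-negative convex, hence s-convex for s in (0, 1]

worst = 0.0
for _ in range(10_000):
    lam = rng.uniform(0.0, 5.0, size=4)   # points lambda_r in B = [0, 5]
    delta = rng.dirichlet(np.ones(4))     # weights delta_r >= 0 summing to 1
    lhs = h(np.dot(delta, lam))           # h(sum_r delta_r * lambda_r)
    rhs = np.dot(delta ** s, h(lam))      # sum_r delta_r**s * h(lambda_r)
    worst = max(worst, lhs - rhs)

print("max(lhs - rhs) over trials:", worst)  # expected <= 0 up to rounding
```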

New inequality on Jain-Saraswat's divergence
Now we derive the following new inequality on S_h(Θ, Φ) for s-convex functions. This inequality will further illustrate the relations among different information divergences.

Some special results
By using the inequalities (2) and (9) together, we obtain significant results in terms of different divergences. Proposition 3.1 gives the result in terms of the Triangular Discrimination.
Its generating function is strictly convex by definition, since h′′(u) > 0 for all u > 0. Further, for the function h(u) = 2(u − 1)log u, we obtain (10), which is the Relative J-divergence measure [17]. The desired result (11) then follows by using the inequalities (2) and (9), after a small simplification.
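As a numerical sanity check of this generating step, the sketch below evaluates S_h with h(u) = 2(u − 1)log u and compares it against the Relative J-divergence computed directly, under the S_h form assumed in the introduction; the sample distributions are illustrative.

```python
import numpy as np

theta = np.array([0.1, 0.6, 0.3])
phi = np.array([0.3, 0.3, 0.4])
u = (theta + phi) / (2.0 * phi)

via_S_h = float(np.sum(phi * 2.0 * (u - 1.0) * np.log(u)))                    # S_h with h(u) = 2(u-1)log u
direct = float(np.sum((theta - phi) * np.log((theta + phi) / (2.0 * phi))))   # relative J-divergence
print(via_S_h, direct)  # the two evaluations should coincide
```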
By the same procedure, and omitting the proofs, we have the following results for u ∈ (0, ∞) and s ∈ (0, 1]: a result in terms of the Chi-square divergence measure [18], and, in Proposition 3.5, results in terms of the Root mean square divergence measure [32] and the Contra harmonic mean divergence measure [32].
Similarly, skipping the detailed proofs, we have the following cases for s ∈ (0, ∞). Propositions 3.18-3.20 give the results in terms of the KL divergence measure [21], the Relative AG divergence measure [19], and the Jain-Chhabra divergence measure [27]: Proposition 3.18 takes the function h(u) = u log u, u > 0, and Proposition 3.19 takes the function h(u) = (2u − 1)log(2u − 1), u > 1/2, in each case taking Remark 1.2 (b) into consideration.

Verification of the results
To be sure that the obtained results are authentic, it is necessary to choose appropriate data. We cannot consider all 20 results for this process, so we take only three results from each case; the remaining results can be verified by the same procedure. Likewise, it is not possible to validate the outcomes at every value of s in the given domain, so we fix the value of s as 1/2. Of course, a similar procedure can be used for other values of s.
Let us take two discrete probability distributions, Θ (Binomial) and Φ (Poisson), with a finite number of trials N = 10 and probability of success of one trial θ = 0.7. The probability of failure of one trial is therefore ϕ = 1 − θ = 1 − 0.7 = 0.3, and the Poisson parameter is Nθ = 10 × 0.7 = 7. The Binomial distribution represents the real information, while the Poisson distribution is its approximated form. Using the probability mass functions of the two distributions, we evaluate them over the random variable T, and from Table 1 we conclude that the constants ζ and κ take the values 0.503 and 1.396. Substituting the data from equations (30), (33) and (34), together with the values of ζ and κ, into the inequality (12) at s = 1/2 confirms that the inequality holds; similarly, substituting the values from equations (31), (33) and (34), and from equations (32), (33) and (34), together with ζ and κ, verifies the results (20) and (29).

Remark 4.1 (a) The results can be verified at other values of the number of trials N and of the probability of success of one trial θ. (b) The results can be verified by taking other discrete probability distributions.
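A small Python sketch of this verification setup, assuming the Poisson pmf is truncated at the Binomial support T = 0, 1, …, N and the S_h form recalled in the introduction; the variable names are illustrative.

```python
from math import comb, exp, factorial, log

N, p, lam = 10, 0.7, 7.0

theta = [comb(N, t) * p**t * (1 - p)**(N - t) for t in range(N + 1)]  # Binomial(10, 0.7) pmf
phi = [exp(-lam) * lam**t / factorial(t) for t in range(N + 1)]       # Poisson(7) pmf, truncated at N

# Bounds of the ratio (theta_t + phi_t) / (2 * phi_t) over the truncated support;
# numerically these come out near 0.503 and 1.396, the constants quoted above.
u = [(a + b) / (2 * b) for a, b in zip(theta, phi)]
print(min(u), max(u))

# One of the verified quantities: S_h with h(u) = 2(u - 1)log u, the relative J-divergence.
print(sum(b * 2 * (v - 1) * log(v) for b, v in zip(phi, u)))
```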

Conclusion
Several articles have established information inequalities using divergence measures built on various convex functions; in this article, the inequality is defined on s-convex functions, and novel results are obtained in terms of different well-known divergence measures. The author believes that these results have significant implications for information theory at different levels, including signal processing, statistical data analysis, pattern recognition, the analysis of contingency tables, the testing of statistical hypotheses, and others.
