Question Details


Explain, in your own words (without going into mathematical details, although you may use the odd equation for illustrative purposes), the main features of the Heisenberg-Born-Jordan atom, as described by their new theory of Quantum Mechanics, using quotes from the papers of Born and Jordan and the analysis of that paper by Fedak and Prentis. The explanation should include: what are the new variables describing the states of the atom? What are their fundamental properties? How do they link to observables? The two papers are attached.

Also add references

M. Born and P. Jordan, On Quantum Mechanics, Z. Phys. 34 (1925) 858. Received 1925.

English translation in: Sources of Quantum Mechanics, ed. B. L. van der Waerden, North-Holland, Amsterdam (1967), p. 277.


The recently published theoretical approach of Heisenberg is here developed into a systematic theory of quantum mechanics (in the first place for systems having one degree of freedom) with the aid of mathematical matrix methods. After a brief survey of the latter, the mechanical equations of motion are derived from a variational principle and it is shown that, using Heisenberg's quantum condition, the principle of energy conservation and Bohr's frequency condition follow from the mechanical equations. Using the anharmonic oscillator as example, the question of uniqueness of the solution and of the significance of the phases of the partial vibrations is raised. The paper concludes with an attempt to incorporate electromagnetic field laws into the new theory.

Introduction


The theoretical approach of Heisenberg¹ recently published in this Journal, which aimed at setting up a new kinematical and mechanical formalism in conformity with the basic requirements of quantum theory, appears to us of considerable potential significance. It represents an attempt to render justice to the new facts by setting up a new and really suitable conceptual system instead of adapting the customary conceptions in a more or less artificial and forced manner. The physical reasoning which led Heisenberg to this development has been so clearly described by him that any supplementary remarks appear superfluous. But, as he himself indicates, in its formal, mathematical aspects his approach is but in its initial stages. His hypotheses have been applied only to simple examples without being fully carried through to a generalized theory. Having been in an advantageous position to familiarize ourselves with his ideas throughout their formative stages, we now strive (since his investigations have been concluded) to clarify the mathematically formal content of his approach and present some of our results here. These indicate that it is in fact possible, starting with the basic premises given by Heisenberg, to build up a closed mathematical theory of quantum mechanics which displays strikingly close analogies with classical mechanics, but at the same time preserves the characteristic features of quantum phenomena.

¹ W. Heisenberg, Zs. f. Phys. 33 (1925) 879.


In this we at first confine ourselves, like Heisenberg, to systems having one degree of freedom and assume these to be, from a classical standpoint, periodic. We shall in the continuation of this publication concern ourselves with the generalization of the mathematical theory to systems having an arbitrary number of degrees of freedom, as also to aperiodic motion. A noteworthy generalization of Heisenberg's approach lies in our confining ourselves neither to treatment of nonrelativistic mechanics nor to calculations involving Cartesian systems of coordinates. The only restriction which we impose upon the choice of coordinates is to base our considerations upon libration coordinates, which in classical theory are periodic functions of time. Admittedly, in some instances it might be more reasonable to employ other coordinates: for example, in the case of a rotating body to introduce the angle of rotation φ, which becomes a linear function of time. Heisenberg also proceeded thus in his treatment of the rotator; however, it remains undecided whether the approach applied there can be justified from the standpoint of a consistent quantum mechanics.


The mathematical basis of Heisenberg's treatment is the law of multiplication of quantum-theoretical quantities, which he derived from an ingenious consideration of correspondence arguments. The development of his formalism, which we give here, is based upon the fact that this rule of multiplication is none other than the well-known mathematical rule of matrix multiplication. The infinite square array (with discrete or continuous indices) which appears at the start of the next section, termed a matrix, is a representation of a physical quantity which is given in classical theory as a function of time. The mathematical method of treatment inherent in the new quantum mechanics is thereby characterized through the employment of matrix analysis in place of the usual number analysis.


Using this method, we have attempted to tackle some of the simplest problems in mechanics and electrodynamics. A variational principle, derived from correspondence considerations, yields equations of motion for the most general Hamilton function which are in closest analogy with the classical canonical equations. The quantum condition conjoined with one of the relations which proceed from the equations of motion permits a simple matrix notation. With the aid of this, one can prove the general validity of the law of conservation of energy and the Bohr frequency relation in the sense conjectured by Heisenberg: this proof could not be carried through in its entirety by him even for the simple examples which he considered. We shall later return in more detail to one of these examples in order to derive a basis for consideration of the part played by the phases of the partial vibrations in the new theory. We show finally that the basic laws of the electromagnetic field in a vacuum can readily be incorporated, and we furnish substantiation for the assumption made by Heisenberg that the squares of the absolute values of the elements in a matrix representing the electrical moment of an atom provide a measure for the transition probabilities.

Chapter 1. Matrix Calculation


1. Elementary operations. Functions


We consider square infinite matrices,² which we shall denote by heavy type to distinguish them from ordinary quantities, which will throughout be in light type:

    a = (a(nm)) =  | a(00)  a(01)  a(02)  ··· |
                   | a(10)  a(11)  a(12)  ··· |
                   | a(20)  a(21)  a(22)  ··· |
                   | ···    ···    ···        |

² Further details of matrix algebra can be found, e.g., in M. Bôcher, Einführung in die höhere Algebra (translated from the English by Hans Beck; Teubner, Leipzig, 1910) §§ 22-25; also in R. Courant and D. Hilbert, Methoden der mathematischen Physik 1 (Springer, Berlin, 1924) Chapter I.

Equality of two matrices is defined as equality of corresponding components:

a = b means a(nm) = b(nm). (1)

Matrix addition is defined as addition of corresponding components:

a = b + c means a(nm) = b(nm) + c(nm). (2)

Matrix multiplication is defined by the rule "rows times columns", familiar from the theory of determinants:

a = bc means a(nm) = Σ_{k=0}^{∞} b(nk) c(km). (3)

Powers are defined by repeated multiplication. The associative rule applies to multiplication, and the distributive rule to combined addition and multiplication:

(ab)c = a(bc); (4)

a(b + c) = ab + ac. (5)

However, the commutative rule does not hold for multiplication: it is not in general correct to set ab = ba. If a and b do satisfy this relation, they are said to commute.
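The multiplication rule (3) and the failure of commutativity can be made concrete with a small numerical sketch (not part of the paper; finite 2x2 arrays and the NumPy library are used purely for illustration):

```python
import numpy as np

# Two finite matrices standing in for the (in general infinite) arrays of the text.
b = np.array([[1, 2],
              [3, 4]])
c = np.array([[0, 1],
              [1, 0]])

# Rule (3): a(nm) = sum_k b(nk) c(km), i.e. ordinary "rows times columns".
a = np.array([[sum(b[n, k] * c[k, m] for k in range(2)) for m in range(2)]
              for n in range(2)])
assert (a == b @ c).all()

# The commutative rule fails: bc and cb need not agree.
print(b @ c)   # [[2 1], [4 3]]
print(c @ b)   # [[3 4], [1 2]]
assert not (b @ c == c @ b).all()
```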


The unit matrix, defined by

1 = (δ_{nm}),   δ_{nm} = 1 for n = m,   δ_{nm} = 0 for n ≠ m, (6)

has the property

a1 = 1a = a. (6a)

The reciprocal matrix to a, namely a⁻¹, is defined by³

a⁻¹a = aa⁻¹ = 1. (7)

As mean value of a matrix a we denote that matrix whose diagonal elements are the same as those of a, whereas all other elements vanish:

ā = (δ_{nm} a(nm)). (8)

³ As is known, a⁻¹ is uniquely defined by (7) for finite square matrices when the determinant A of the matrix a is non-zero. If A = 0 there is no matrix reciprocal to a.


The sum of these diagonal elements will be termed the diagonal sum of the matrix a and written as D(a), viz.

D(a) = Σ_n a(nn). (9)

From (3) it is easy to prove that if the diagonal sum of a product y = x1 x2 ··· xm be finite, then it is unchanged by cyclic rearrangement of the factors:

D(x1 x2 ··· xm) = D(x_r x_{r+1} ··· x_m x1 x2 ··· x_{r−1}). (10)

Clearly, it suffices to establish the validity of this rule for two factors.


If the elements of the matrices a and b are functions of a parameter t, then

d/dt Σ_k a(nk) b(km) = Σ_k {ȧ(nk) b(km) + a(nk) ḃ(km)},

or, from the definition (3):

d/dt (ab) = ȧb + aḃ. (11)

Repeated application of (11) gives

d/dt (x1 x2 ··· xn) = ẋ1 x2 ··· xn + x1 ẋ2 ··· xn + ··· + x1 x2 ··· ẋn. (11′)

From the definitions (2) and (3) we can define functions of matrices. To begin


with, we consider as the most general function of this type, f(x1, x2, ··· xm), one which can formally be represented as a sum of a finite or infinite number of products of powers of the arguments xk, weighted by numerical coefficients. Through the equations

f1(y1, ··· yn; x1, ··· xn) = 0,
·····
fn(y1, ··· yn; x1, ··· xn) = 0 (12)

we can then also define functions yl(x1, ··· xn); namely, in order to obtain functions yl having the above form and satisfying equations (12), the yl need only be set in the form of a series in increasing power products of the xk and the coefficients determined through substitution in (12). It can be seen that one will always derive as many equations as there are unknowns. Naturally, the number of equations and unknowns exceeds that which would ensue from applying the method of undetermined coefficients in the normal type of analysis incorporating commutative multiplication. In each of the equations (12), upon substituting the series for the yl and gathering together like terms, one obtains not only a sum term C′x1x2 but also a term C″x2x1, and thereby has to bring both C′ and C″ to vanish (e.g., not only C′ + C″). This is, however, made possible by the fact that in the expansion of each of the yl, two terms x1x2 and x2x1 appear, with two available coefficients.

2. Symbolic differentiation


At this stage we have to examine in detail the process of differentiation of a matrix function, which will later be employed frequently in calculation. One should at the outset note that only in a few respects does this process display similarity to that of differentiation in ordinary analysis. For example, the rules for differentiation of a product or of a function of a function here no longer apply in general. Only if all the matrices which occur commute with one another can one apply all the rules of normal analysis to this differentiation.

Consider a product of s factors,

y = Π_{m=1}^{s} x_{l_m} = x_{l_1} x_{l_2} ··· x_{l_s}. (13)

We define

∂y/∂x_k = Σ_{r=1}^{s} δ_{l_r k} Π_{m=r+1}^{s} x_{l_m} · Π_{m=1}^{r−1} x_{l_m},   with δ_{jk} = 0 for j ≠ k, δ_{kk} = 1. (14)

This rule may be expressed as follows: In the given product, one regards all factors as written out individually (e.g., not as x1³x2², but as x1x1x1x2x2); one then picks out any factor xk and builds the product of all the factors which follow it and of all which precede it (in this sequence). The sum of all such expressions is the differential coefficient of the product with respect to this xk.


The procedure may be illustrated by some examples:

y = xⁿ,   dy/dx = n x^{n−1};

y = x1ⁿ x2ᵐ,   ∂y/∂x1 = x1^{n−1} x2ᵐ + x1^{n−2} x2ᵐ x1 + ··· + x2ᵐ x1^{n−1};

y = x1² x2 x1 x3,   ∂y/∂x1 = x1 x2 x1 x3 + x2 x1 x3 x1 + x3 x1² x2.




If we further stipulate that

∂(y1 + y2)/∂x_k = ∂y1/∂x_k + ∂y2/∂x_k, (15)

then the derivative ∂y/∂x_k is defined for the most general analytical functions.




With the above definitions, together with that of the diagonal sum (9), there follows the relation

∂D(y)/∂x_k(nm) = (∂y/∂x_k)(mn), (16)

on the right-hand side of which stands the mn-component of the matrix ∂y/∂x_k. This relation can also be used to define the derivative ∂y/∂x_k. In order to prove (16), it obviously suffices to consider a function y having the form (13). From (14) and (3) it follows that

(∂y/∂x_k)(mn) = Σ_{r=1}^{s} δ_{l_r k} Σ_τ Π_{p=r+1}^{s} x_{l_p}(τ_p τ_{p+1}) · Π_{p=1}^{r−1} x_{l_p}(τ_p τ_{p+1}),
with τ_{r+1} = m, τ_{s+1} = τ_1, τ_r = n. (17)

On the other hand, from (3) and (9) ensues

∂D(y)/∂x_k(nm) = Σ_{r=1}^{s} δ_{l_r k} Σ_τ Π_{p=1}^{r−1} x_{l_p}(τ_p τ_{p+1}) · Π_{p=r+1}^{s} x_{l_p}(τ_p τ_{p+1}),
with τ_1 = τ_{s+1}, τ_r = n, τ_{r+1} = m. (17′)

Comparison of (17) with (17′) yields (16).


We here pick out a fact which will later assume importance and which can be deduced from the definition (14): the partial derivatives of a product are invariant with respect to cyclic rearrangement of the factors. Because of (16) this can also be inferred from (10).
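The trace-derivative relation (16) can be checked numerically. The sketch below (a modern illustration, not part of the paper; the matrix size and finite-difference step are my choices) differentiates D(y) for the example y = x1²x2x1x3 by central differences and compares with the symbolic derivative given by rule (14):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
x1, x2, x3 = (rng.standard_normal((N, N)) for _ in range(3))

# Symbolic derivative of y = x1^2 x2 x1 x3 with respect to x1, per rule (14):
# pick each x1 factor and multiply "all factors which follow, then all which precede".
dy_dx1 = (x1 @ x2 @ x1 @ x3
          + x2 @ x1 @ x3 @ x1
          + x3 @ x1 @ x1 @ x2)

# Relation (16): the derivative of D(y) with respect to the element x1(nm)
# is the (mn)-component of dy/dx1. Check by central differences.
def D_of_y(m1):
    return np.trace(m1 @ m1 @ x2 @ m1 @ x3)

h = 1e-6
grad = np.zeros((N, N))
for n in range(N):
    for m in range(N):
        e = np.zeros((N, N)); e[n, m] = h
        grad[n, m] = (D_of_y(x1 + e) - D_of_y(x1 - e)) / (2 * h)

assert np.allclose(grad, dy_dx1.T, atol=1e-5)
```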


To conclude this introductory section, some additional description is devoted to functions g(pq) of the variables p and q. For

y = pˢ qʳ (18)

it follows from (14) that

∂y/∂p = Σ_{l=0}^{s−1} p^{s−1−l} qʳ pˡ,   ∂y/∂q = Σ_{j=0}^{r−1} q^{r−1−j} pˢ qʲ. (18′)

The most general function g(pq) to be considered is to be represented in accordance with §1 by a linear aggregate of terms

z = Π_{j=1}^{k} (p^{s_j} q^{r_j}). (19)

With the abbreviation

P_l = Π_{j=l+1}^{k} (p^{s_j} q^{r_j}) · Π_{j=1}^{l−1} (p^{s_j} q^{r_j}), (20)

one can write the derivatives as

∂z/∂p = Σ_{l=1}^{k} Σ_{m=0}^{s_l−1} p^{s_l−1−m} q^{r_l} P_l pᵐ,

∂z/∂q = Σ_{l=1}^{k} Σ_{m=0}^{r_l−1} q^{r_l−1−m} P_l p^{s_l} qᵐ. (21)

From these equations we find an important consequence. We consider the












two quantities

d1 = q ∂z/∂q − (∂z/∂q) q,   d2 = p ∂z/∂p − (∂z/∂p) p. (22)

From (21) we have

d1 = Σ_{l=1}^{k} (q^{r_l} P_l p^{s_l} − P_l p^{s_l} q^{r_l}),

d2 = Σ_{l=1}^{k} (p^{s_l} q^{r_l} P_l − q^{r_l} P_l p^{s_l}),

and thus it follows that

d1 + d2 = Σ_{l=1}^{k} (p^{s_l} q^{r_l} P_l − P_l p^{s_l} q^{r_l}).

Herein the second member of each term cancels the first member of the following, and the first and last member of the overall sum also cancel, so that

d1 + d2 = 0. (23)

Because of its linear character in z, this relation holds not only for expressions z having the form (19), but indeed for arbitrary analytical functions g(pq).
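The cancellation argument can be verified mechanically. The following sketch (illustrative only; the helper names `product` and `deriv` are mine) implements the differentiation rule (14) for an arbitrary word in p and q and confirms d1 + d2 = 0 for a sample product:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
mats = {'p': rng.standard_normal((N, N)), 'q': rng.standard_normal((N, N))}

def product(word):
    out = np.eye(N)
    for sym in word:
        out = out @ mats[sym]
    return out

def deriv(word, k):
    """Rule (14): for each factor equal to x_k, multiply all factors that
    follow it by all that precede it, and sum the results."""
    total = np.zeros((N, N))
    for r, sym in enumerate(word):
        if sym == k:
            total += product(word[r + 1:] + word[:r])
    return total

# z = p^2 q p q^3, written out factor by factor:
word = ['p', 'p', 'q', 'p', 'q', 'q', 'q']
p, q = mats['p'], mats['q']
dz_dp, dz_dq = deriv(word, 'p'), deriv(word, 'q')

# (22)/(23): d1 + d2 = 0 for any such product.
d1 = q @ dz_dq - dz_dq @ q
d2 = p @ dz_dp - dz_dp @ p
assert np.allclose(d1 + d2, np.zeros((N, N)))
```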




In concluding this brief survey of matrix analysis, we establish the following rule: Every matrix equation

F(x1, x2, ··· xr) = 0

remains valid if in all the matrices xj one and the same permutation of all rows and columns is undertaken. To this end, it suffices to show that for two matrices a, b which thereby become transposed into a′, b′, the following invariance conditions apply:

a′ + b′ = (a + b)′,   a′b′ = (ab)′,

wherein the right-hand sides denote those matrices which are formed from a + b and ab respectively by such an interchange.

We set forth this proof by replacing the procedure of permutation by that of multiplication with a suitable matrix.⁵ We write a permutation as

( 0   1   2   3  ··· )
( k0  k1  k2  k3 ··· )

and to this we assign a permutation matrix

p = (p(nm)),   p(nm) = 1 when m = k_n, and 0 otherwise.

The transposed matrix to p is

p̃ = (p̃(nm)),   p̃(nm) = 1 when n = k_m, and 0 otherwise.

⁴ More generally, for a function g of r variables one has Σ_r {x_r ∂g/∂x_r − (∂g/∂x_r) x_r} = 0.

⁵ The method of proof adopted here possesses the merit of revealing the close connection of permutations with an important class of more general transformations of matrices. The validity of the rule in question can however also be established directly on noting that in the definitions of equality, as also of addition and multiplication of matrices, no use was made of order relationships between the rows or the columns.

On multiplying the two together, one has

pp̃ = (Σ_k p(nk) p̃(km)) = (δ_{nm}) = 1,

since the two factors p(nk) and p̃(km) differ from zero simultaneously only if k = k_n = k_m, i.e., when n = m. Hence p̃ is reciprocal to p:

p̃ = p⁻¹.

If now a be any given matrix, then

pa = (Σ_k p(nk) a(km)) = (a(k_n, m))

is a matrix which arises from a permutation of the rows of a, and

ap⁻¹ = (Σ_k a(nk) p̃(km)) = (a(n, k_m))

is the matrix arising from permutation of the columns of a. One and the same permutation applied both to the rows and the columns of a thus yields the matrix

a′ = pap⁻¹.

Thence follows directly

a′ + b′ = p(a + b)p⁻¹ = (a + b)′,   a′b′ = pap⁻¹ · pbp⁻¹ = pabp⁻¹ = (ab)′,

which proves our original contention.
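A small numerical sketch of the permutation argument (mine, not the authors'): the matrix p built from a sample permutation satisfies p̃ = p⁻¹, and conjugation by p respects both sums and products:

```python
import numpy as np

# The permutation (0 1 2 3) -> (k0 k1 k2 k3) = (2 0 3 1), as a matrix
# with p(nm) = 1 when m = k_n and 0 otherwise.
k = [2, 0, 3, 1]
P = np.zeros((4, 4), dtype=int)
for n, kn in enumerate(k):
    P[n, kn] = 1

# The transposed matrix is the reciprocal: P~ = P^{-1}.
assert (P @ P.T == np.eye(4, dtype=int)).all()

rng = np.random.default_rng(3)
a = rng.integers(0, 10, (4, 4))
b = rng.integers(0, 10, (4, 4))

prime = lambda x: P @ x @ P.T   # same permutation of rows and columns

# Invariance conditions: a' + b' = (a + b)', a'b' = (ab)'.
assert (prime(a) + prime(b) == prime(a + b)).all()
assert (prime(a) @ prime(b) == prime(a @ b)).all()
```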


It is thus apparent that from matrix equations one can never determine any given sequence or order of rank of the matrix elements. Moreover, it is evident that a much more general rule applies, namely that every matrix equation is invariant with respect to transformations of the type

a′ = bab⁻¹,

where b denotes an arbitrary matrix. We shall see later that this does not necessarily always apply to matrix differential equations.

Chapter 2. Dynamics


3. The basic laws


The dynamic system is to be described by the spatial coordinate q and the momentum p, these being represented by the matrices

q = (q(nm) e^{2πiν(nm)t}),   p = (p(nm) e^{2πiν(nm)t}). (24)

Here the ν(nm) denote the quantum-theoretical frequencies associated with transitions between states described by the quantum numbers n and m. The matrices (24) are to be Hermitian, i.e., on transposition of the matrices, each element is to go over into its complex conjugate value, a condition which should apply for all real t. We thus have

q(nm)q(mn) = |q(nm)|² (25)

and

ν(nm) = −ν(mn). (26)

If q be a Cartesian coordinate, then the expression (25) is a measure of the probabilities⁶ of the transitions n ⇄ m.

Further, we shall require that

ν(jk) + ν(kl) + ν(lj) = 0. (27)

This can be expressed together with (26) in the following manner: there exist quantities Wn such that

hν(nm) = Wn − Wm. (28)

From this, with equations (2), (3), it follows that a function g(pq) invariably again takes on the form

g = (g(nm) e^{2πiν(nm)t}), (29)

and the matrix (g(nm)) therein results from identically the same process applied to the matrices (q(nm)), (p(nm)) as was employed to find g from q, p. For this reason we can henceforth abandon the representation (24) in favour of the shorter notation

q = (q(nm)),   p = (p(nm)). (30)

⁶ In this connection see §8.

For the time derivative of the matrix g = (g(nm)), recalling to mind (24)


or (29), we obtain the matrix

ġ = 2πi (ν(nm) g(nm)). (31)

If ν(nm) ≠ 0 when n ≠ m, a condition which we wish to assume, then the formula ġ = 0 denotes that g is a diagonal matrix, with g(nm) = δ_{nm} g(nn).

A matrix differential equation ġ = a is invariant with respect to that process in which the same permutation is carried out on the rows and columns of all the matrices and also upon the numbers Wn. In order to realize this, consider the diagonal matrix

W = (δ_{nm} Wn).

Then

Wg = (Σ_k δ_{nk} Wn g(km)) = (Wn g(nm)),

gW = (Σ_k g(nk) δ_{km} Wk) = (Wm g(nm)),

i.e., according to (31) and (28),

ġ = (2πi/h)((Wn − Wm) g(nm)) = (2πi/h)(Wg − gW).

If now p be a permutation matrix, then the transform of W,

W′ = pWp⁻¹ = (δ_{nm} W_{k_n}),

is the diagonal matrix with the permuted Wn along the diagonal. Thence one has

pġp⁻¹ = (2πi/h)(W′g′ − g′W′) = ġ′,

where g′ = pgp⁻¹ and ġ′ denotes the time derivative of g′ constructed in accordance with the rule (31) with permuted Wn.

The rows and columns of ġ thus experience the same permutation as those of g, and hence our contention is vindicated.
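The equivalence of the element-wise rule (31) with the commutator form involving W can be spot-checked numerically (a modern sketch; the sample term values and the numerical value of h are arbitrary choices of mine):

```python
import numpy as np

h = 1.0                                   # numerical value immaterial here
W = np.array([5.0, 3.0, 2.0, 0.5]) * h    # sample term values W_n
rng = np.random.default_rng(4)
g = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Frequencies from the term scheme (28): h nu(nm) = W_n - W_m.
nu = (W[:, None] - W[None, :]) / h

# Element-wise form (31) of the time derivative ...
g_dot = 2j * np.pi * nu * g

# ... agrees with the commutator form (2 pi i / h)(Wg - gW), W = diag(W_n).
Wmat = np.diag(W)
assert np.allclose(g_dot, (2j * np.pi / h) * (Wmat @ g - g @ Wmat))
```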


It is to be noted that a corresponding rule does not apply to arbitrary transformations of the form a′ = bab⁻¹, since for these W′ is no longer a diagonal matrix. Despite this difficulty, a thorough study of these general transformations would seem to be called for, since it offers promise of insight into the deeper connections intrinsic to this new theory; we shall later revert to this point.⁷


In the case of a Hamilton function having the form

H = (1/2m) p² + U(q),

we shall assume, as did Heisenberg, that the equations of motion are just of the same form as in classical theory, so that using the notation of §2 we can write:

q̇ = ∂H/∂p = (1/m) p,
ṗ = −∂H/∂q = −∂U/∂q. (32)




We now use correspondence considerations to try more generally to elucidate the equations of motion belonging to an arbitrary Hamilton function H(pq). This is required from the standpoint of relativistic mechanics and in particular for the treatment of electron motion under the influence of magnetic fields. For in this latter case the function H can no longer, in a Cartesian coordinate system, be represented as the sum of two functions of which one depends only on the momenta and the other only on the coordinates.

Classically, equations of motion can be derived from the action principle

∫_{t0}^{t1} L dt = ∫_{t0}^{t1} {pq̇ − H(pq)} dt = extremum. (33)

If we now envisage the Fourier expansion of L substituted in (33) and the time interval t1 − t0 taken sufficiently large, we find that only the constant term of L supplies a contribution to the integral. The form which the action principle thence acquires suggests the following translation into quantum mechanics: the diagonal sum D(L) = Σ_k L(kk) is to be made an extremum,

D(L) = D(pq̇ − H(pq)) = extremum, (34)

namely, by suitable choice of p and q, with ν(nm) kept fixed.

Thus, by setting the derivatives of D(L) with respect to the elements of p and q equal to zero, one obtains the equations of motion

2πiν(nm) q(nm) = ∂D(H)/∂p(mn),

2πiν(nm) p(nm) = −∂D(H)/∂q(mn).

From (26), (31) and (16) one observes that these equations of motion can always be written in the canonical form

q̇ = ∂H/∂p,   ṗ = −∂H/∂q. (35)

⁷ Cf. the continuation of this work, to be published forthwith.

For the quantization condition, Heisenberg employed a relation proposed


by Thomas⁸ and Kuhn.⁹ The equation

J = ∮ p dq = ∫_0^{1/ν} pq̇ dt

of "classical" quantum theory can, on introducing the Fourier expansions of p and q,

p = Σ_{τ=−∞}^{∞} p_τ e^{2πiντt},   q = Σ_{τ=−∞}^{∞} q_τ e^{2πiντt},

be transformed into

1 = 2πi Σ_{τ=−∞}^{∞} τ ∂(q_τ p_{−τ})/∂J. (36)

If therein one has p = mq̇, one can express the p_τ in terms of the q_τ and thence obtain that classical equation which, on transformation into a difference equation according to the principle of correspondence, yields the formula of Thomas and Kuhn. Since here the assumption that p = mq̇ should be avoided, we are obliged to translate equation (36) directly into a difference equation. The following expressions should correspond:

Σ_{τ=−∞}^{∞} τ ∂(q_τ p_{−τ})/∂J

with

(1/h) Σ_{τ=−∞}^{∞} (q(n+τ, n) p(n, n+τ) − q(n, n−τ) p(n−τ, n)),

where in the right-hand expression those q(nm), p(nm) which take on a negative index are to be set equal to zero. In this way we obtain the quantization condition corresponding to (36) as

Σ_k (p(nk) q(kn) − q(nk) p(kn)) = h/2πi. (37)

This is a system of infinitely many equations, namely one for each value of n.

In particular, for p = mq̇ this yields

Σ_k ν(kn) |q(nk)|² = h/8π²m,

which, as may easily be verified, agrees with Heisenberg's form of the quantization condition, or with the Thomas-Kuhn equation. The formula (37) has to be regarded as the appropriate generalization of this equation.

⁸ W. Thomas, Naturwiss. 13 (1925) 627.
⁹ W. Kuhn, Zs. f. Phys. 33 (1925) 408.
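Although the matrices of the theory are infinite, the condition (37) can be illustrated with a finite truncation of the familiar harmonic-oscillator matrices (an anachronistic convenience of mine, not a computation from the paper; modern ladder-operator arrays are used, with m = ω = 1 and units in which h = 2π):

```python
import numpy as np

# Truncated harmonic-oscillator matrices: a finite N x N block of the
# infinite arrays. The truncation spoils only the last row and column.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # "lowering" array
q = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

h = 2 * np.pi
d = p @ q - q @ p

# Quantization condition (37): sum_k (p(nk)q(kn) - q(nk)p(kn)) = h/(2 pi i)
# for every n. It holds here in every row except the truncated last one.
target = h / (2j * np.pi)                    # = -1j in these units
for n in range(N - 1):
    assert np.isclose(d[n, n], target)

# The off-diagonal elements vanish, so d is a diagonal matrix.
assert np.allclose(d - np.diag(np.diag(d)), 0)
```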


Incidentally one sees from (37) that the diagonal sum D(pq) necessarily becomes infinite. For otherwise one would have D(pq) − D(qp) = 0, whereas (37) leads to D(pq) − D(qp) = ∞. Thus the matrices under consideration are never finite.¹⁰

¹⁰ Further, they do not belong to the class of "bounded" infinite matrices hitherto almost exclusively investigated by mathematicians.

4. Consequences. Energy-conservation and frequency laws

The content of the preceding paragraphs furnishes the basic rules of the new quantum mechanics in their entirety. All other laws of quantum mechanics, whose general validity is to be verified, must be derivable from these basic tenets. As instances of such laws to be proved, the law of energy conservation and the Bohr frequency condition primarily enter into consideration. The law of conservation of energy states that if H be the energy, then Ḣ = 0, i.e., that H is a diagonal matrix. The diagonal elements H(nn) of H are interpreted, according to Heisenberg, as the energies of the various states of the system, and the Bohr frequency condition requires that

hν(nm) = H(nn) − H(mm),

i.e.,

Wn = H(nn) + const.

We consider the quantity


d = pq − qp.

From (11), (35) one finds

ḋ = ṗq + pq̇ − q̇p − qṗ = q ∂H/∂q − (∂H/∂q) q + p ∂H/∂p − (∂H/∂p) p.

Thus from (22), (23) it follows that ḋ = 0, and d is a diagonal matrix. The diagonal elements of d are, however, specified just by the quantum condition (37). Summarizing, we obtain the equation

pq − qp = (h/2πi) 1, (38)

on introducing the unit matrix 1 defined by (6). We term the equation (38) the "stronger quantum condition" and base all further conclusions upon it.


From the form of this equation, we deduce the following: If an equation (A) be derived from (38), then (A) remains valid if p be replaced by q and simultaneously h by −h. For this reason one need for instance derive only one of the following two equations from (38), which can readily be performed by induction:

pⁿq = qpⁿ + n (h/2πi) p^{n−1}, (39)

qⁿp = pqⁿ − n (h/2πi) q^{n−1}. (39′)






We shall now prove the energy-conservation and frequency laws, as expressed above, in the first instance for the case

H = H1(p) + H2(q).

From the statements of §1, it follows that we may formally replace H1(p) and H2(q) by power expansions

H1 = Σ_s a_s pˢ,   H2 = Σ_s b_s qˢ.

Formulae (39) and (39′) indicate that

Hq − qH = (h/2πi) ∂H/∂p,
Hp − pH = −(h/2πi) ∂H/∂q. (40)

Comparison with the equations of motion (35) yields

q̇ = (2πi/h)(Hq − qH),   ṗ = (2πi/h)(Hp − pH).

Denoting the matrix Hg − gH for brevity by [H, g], one has

[H, ab] = [H, a] b + a [H, b],

from which generally for g = g(pq) one may conclude that

ġ = (2πi/h)(Hg − gH). …
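As a concrete illustration of the energy-conservation and frequency laws (again a modern sketch with truncated harmonic-oscillator matrices, m = ω = 1 and h = 2π, not a computation from the paper itself):

```python
import numpy as np

# Truncated oscillator matrices, rebuilt so the sketch is self-contained.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
q = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

# H = p^2/2 + q^2/2 for the harmonic oscillator.
H = (p @ p + q @ q) / 2

# Energy conservation: H is a diagonal matrix (so H-dot = 0) ...
assert np.allclose(H - np.diag(np.diag(H)), 0)

# ... whose diagonal elements are the state energies, here n + 1/2
# (the last entry is corrupted by the truncation and is skipped) ...
for n in range(N - 1):
    assert np.isclose(H[n, n], n + 0.5)

# ... and the Bohr frequency condition h*nu(nm) = H(nn) - H(mm) then fixes
# the transition frequencies, e.g. nu(10) = 1/(2*pi) in these units.
h = 2 * np.pi
nu_10 = (H[1, 1] - H[0, 0]) / h
assert np.isclose(nu_10, 1 / (2 * np.pi))
```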

