Cyclosporine Therapy for Kidney Transplant: What is New for an Old-Fashioned Therapy? | OMICS International
ISSN: 2161-0495
Journal of Clinical Toxicology


Cyclosporine Therapy for Kidney Transplant: What is New for an Old-Fashioned Therapy?

José Alberto Pedroso1* and Franco Citterio2

1Hospital de Clínicas de Porto Alegre, Nephrology Department, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil

2Policlinico Agostino Gemelli, Università Cattolica del Sacro Cuore, Rome, Italy

*Corresponding Author:
José Alberto Pedroso
Hospital de Clínicas de Porto Alegre
Nephrology Department, Universidade Federal
do Rio Grande do Sul, Porto Alegre, Brazil
Tel: +555195312137
E-mail: [email protected]

Received date: June 19, 2015 Accepted date: October 05, 2015 Published date: October 15, 2015

Citation: Pedroso JA, Citterio F (2015) Cyclosporine Therapy for Kidney Transplant: What is New for an Old-Fashioned Therapy? J Clin Toxicol 5:272. doi:10.4172/2161-0495.1000272

Copyright: © 2015 Pedroso JA, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


Abstract

Calcineurin inhibitors have been the mainstay of immunosuppression for the last decades and still remain relevant drugs in the setting of solid organ transplantation. Nevertheless, balancing excess immunosuppression with cyclosporine (leading to increased risk of metabolic complications, nephrotoxicity or cancer) against insufficient immunosuppression (with an augmented risk of rejection) remains a challenge in clinical practice. The aim of this paper is to review the pharmacokinetic and pharmacodynamic evolution of this landmark drug. We also discuss what can be expected of this CNI in the coming decade.

Keywords

Cyclosporine; Therapeutic drug monitoring; Pharmacokinetics; Pharmacodynamics; Immunosuppressors

Introduction

Immunosuppressive therapy in kidney transplantation should provide maximal efficacy without toxicity [1-3]. Immunosuppressive agents used in maintenance therapy usually have a narrow therapeutic index, and the range of exposure causing toxicity is very close to the optimal immunosuppressive exposure [3-5]. Calcineurin inhibitors (CNI, namely cyclosporine and tacrolimus) have been the cornerstone of modern immunosuppressive therapy. Both agents have a narrow therapeutic window, so therapeutic drug monitoring (TDM) is critical to optimize the immunosuppressive treatment [1].

Cyclosporine (CsA) is a lipophilic cyclic polypeptide immunosuppressant that interferes with T-cell activation by binding a target cytoplasmic protein, cyclophilin. The cyclosporine-cyclophilin complex inhibits the activation of the Ca++/calmodulin-activated phosphatase calcineurin. This prevents the nuclear translocation of a transcription factor (nuclear factor of activated T cells, NF-AT), inhibits activation of another transcription factor (NF-κB) and, in consequence, inhibits expression of the interleukin-2 (IL-2) gene, suppressing T-cell activation. IL-2 acts as an autocrine growth factor, inducing T-cell proliferation, clonal expansion and production of other cytokines [3,6]. After a single dose, CsA is absorbed and reaches a peak blood concentration (Cmax) during the first 2-3 hours. In the elimination phase, the drug level falls; the lowest level, measured immediately before administration of the next dose, is the trough concentration, or C0 [3,5]. Therapeutic drug monitoring of cyclosporine microemulsion (CsA-ME) in clinical settings is often performed using C0 and/or C2 (the level two hours after administration).

Cyclosporine undergoes extensive extravascular distribution, with a volume of distribution at steady state of 3 to 5 L/kg and protein binding of 90-98%. It crosses the placenta and distributes into human milk. Blood concentration decays in a biphasic curve, with a first elimination half-life (t1/2) of 1.2 h and a terminal t1/2 between 8.4 and 27 h. It is metabolized mainly by the hepatic cytochrome P450-3A enzyme system, with primarily biliary elimination. Around 6% is excreted in urine, with only 0.1% of the dose as unchanged drug. Clearance is not significantly affected by chronic kidney failure or hemodialysis [6].
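The biphasic decay described above can be sketched numerically as a sum of two exponentials. The half-lives below follow the ranges quoted in the text; the compartment amplitudes are purely illustrative assumptions, not fitted patient data:

```python
import math

def csa_concentration(t_h, a1=800.0, t_half_1=1.2, a2=200.0, t_half_2=12.0):
    """Illustrative biphasic (two-exponential) decay of CsA blood concentration.

    a1 and a2 (ng/mL) are hypothetical amplitudes; the half-lives (hours)
    follow the text: first t1/2 ~1.2 h, terminal t1/2 within 8.4-27 h.
    """
    k1 = math.log(2) / t_half_1  # first-phase elimination rate constant
    k2 = math.log(2) / t_half_2  # terminal-phase rate constant
    return a1 * math.exp(-k1 * t_h) + a2 * math.exp(-k2 * t_h)

# The level falls quickly in the first phase, then slowly in the terminal one:
early_drop = csa_concentration(0) - csa_concentration(2)
late_drop = csa_concentration(10) - csa_concentration(12)
```

With these assumed amplitudes, the drop over the first two hours is several times larger than the drop between hours 10 and 12, which is why the timing of blood sampling matters so much for TDM.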

Chronopharmacokinetic studies report that CsA displays its maximum blood concentration (Cmax) and area under the concentration-time curve (AUC) in the morning. A higher clearance rate could be responsible for a daytime mean CsA half-life (t1/2) lower than the night-time t1/2. Given this greater oral bioavailability of CsA during the daytime, morning blood samples might be ideal for therapeutic CsA monitoring [2].

Evolution of the Cyclosporine Immunosuppressive Strategy

In the last 30 years, cyclosporine has been extensively used in organ transplantation. At the very beginning (early 80s), CsA was used in combination with steroids, and it immediately became clear that monitoring of blood levels was required to deal with the narrow therapeutic window of the drug, since CsA levels show great inter- and intra-individual pharmacokinetic variability. It became evident that renal and liver toxicity could be reverted or reduced with dose adjustment [7,8]. In the 90s, CsA was prevalently employed in combination with steroids plus azathioprine, or with mycophenolate mofetil. The CsA doses and target levels used in these combinations were lower, but TDM was still required. In the 2000s the concept of CNI minimization was introduced, and CsA-sparing protocols were designed in combination with mTOR inhibitors (everolimus and sirolimus). With these associations, CsA exposure was reduced by as much as 60%. In this case too, TDM of blood levels was critical to take advantage of the synergistic combination while avoiding potential toxicity.

Possible Cyclosporine Monitoring Strategies

Monitoring CsA trough blood levels C0

The TDM initially proposed considered trough level (C0) monitoring. The first oral formulation of CsA (Sandimmune®), a corn-oil-based preparation, immediately showed wide inter- and intra-subject pharmacokinetic variability, mostly during the absorption phase, due to bile-dependent drug solubilization, presence of food, gastrointestinal transit time [3], circadian rhythm [2], or inhibition of CYP3A4 metabolism. All these factors significantly affected CsA bioavailability, C0 and the area under the concentration-time curve (AUC0-12 or AUC0-24). A more predictable and stable absorption profile was achieved with the introduction of an oral microemulsion (ME) formulation of CsA (Neoral®) [6]. Nevertheless, food-drug interactions have also been described for CsA-ME pharmacokinetics, sometimes with risk of subtherapeutic exposure, suggesting appropriate control of the fasting interval before ingestion of CsA preparations. Unfortunately, even with the more stable microemulsion formulation, the correlations of C0 trough blood levels with clinical events were poor. Many studies demonstrated that measured C0 could not predict the risk of rejection or toxicity [3,8,9]. Nor did C0 show a good correlation with the whole AUC0-12, leading to the conclusion that C0 was not a good surrogate parameter for total CsA exposure [3,5,8-10], not even within the period of greatest variability among patients (AUC0-4, see below) [1]. In fact, C0 correlates only poorly with AUC0-4 [11-13], was not demonstrated to be a sensitive marker of rejection risk [11-14], and was only a poor predictor of the 24-hour CsA exposure [14].

Monitoring CsA with the area under the curve

Kahan [4], using the full 12-hour area under the concentration-time curve (AUC0-12), first proposed a pharmacokinetic approach based on drug exposure. The extent of monitoring varies from (I) repeated measurement of a single pre-dose CsA level to (II) CsA profiles with up to 14 time points over a 12- or 24-h period [10]. Studies in renal transplant patients have shown that the AUC based on 12-h pharmacokinetics is a sensitive predictor of acute rejection and graft survival at 1 year. Although the most reliable pharmacokinetic (PK) parameter for CsA dosing is AUC0-12, the inconvenience of this approach lies in its cost and the difficulty of performing it on a routine basis [5,8,15]. The proposed measurement of limited or abbreviated AUCs is a simple monitoring method that improves clinical outcomes. Bowles et al. considered an abbreviated AUC0-8, limited to 5-time-point profiles (C0, C2, C4, C6, C8), and demonstrated that it could significantly discriminate rejection from non-rejection, whereas C0 did not have such discriminating cut-off values. Adjustments of CsA dosage were made on the basis of profiles rather than pre-dose levels alone [10]. Inter-patient variability in CsA blood levels is highest within the first 4 h post-dose, when CsA absorption predominates. This consideration provided the theoretical basis for the absorption-profiling concept, a further simplification of pharmacokinetic therapeutic drug monitoring of CsA [8]. Many studies have shown that CsA exposure during the first 4 hours (AUC0-4) correlates well with the whole 0-12 h interval (AUC0-12) and can be used as an alternative [16], especially with the CsA microemulsion formulation. AUC0-4 after administration of cyclosporine microemulsion (CsA-ME) is also predictive of acute cellular rejection within the first 3 months post-transplantation [5,13,16]. Even partial AUC determinations, however, are more suited to experimental than to clinical settings.
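The abbreviated 5-time-point profile used by Bowles et al. can be turned into an AUC estimate with the standard linear trapezoidal rule. The sketch below applies that rule to a hypothetical profile; the concentration values are illustrative only and not taken from any study:

```python
def trapezoidal_auc(times_h, concentrations):
    """Linear trapezoidal estimate of AUC from sparse sampling time points.

    times_h: sampling times in hours; concentrations: levels (ng/mL).
    Returns AUC in ng·h/mL over the sampled interval.
    """
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += dt * (concentrations[i] + concentrations[i - 1]) / 2.0
    return auc

# Abbreviated 5-point profile (C0, C2, C4, C6, C8) with hypothetical levels:
times = [0, 2, 4, 6, 8]
levels = [180, 1100, 700, 420, 300]   # ng/mL, illustrative values
auc_0_8 = trapezoidal_auc(times, levels)  # -> 4920.0 ng·h/mL
```

The same function applied to a full 12- or 24-h profile with more time points gives the reference AUC0-12 or AUC0-24; the abbreviated profile trades a small loss of accuracy for far fewer blood draws.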

Monitoring CsA C2

The cyclosporine blood level obtained 2 hours after administration (C2) is the most reliable parameter of optimal cyclosporine exposure for monitoring adult de novo renal transplant patients [1]. C2 determinations reflect overall CsA exposure [1,8,17] in an age-independent fashion [18]. C2 levels closely track the absorption phase and, considered alone, C2 is the best single time-point predictor of AUC0-12 and AUC0-4 [1,3,5,8,17,19]; C2 values have also been shown to correlate with CsA inhibition of calcineurin [1,12] and with the risk of acute rejection.

A prospective multicentre trial (MO2ART) [13,20,21], in which the CsA-ME dose was adjusted based on the C2 level, showed that CsA clearance accelerates between months 3 and 12 post-transplant, resulting in lower C0 levels for a given exposure (as measured by C2). C0 monitoring, consequently, may progressively underestimate CsA exposure during the first year post-transplant, whereas C2 monitoring improves individualized CsA-ME treatment in the early post-transplant phase and beyond month 3 [13,21].

Despite these findings favouring C2 over C0 monitoring, C0 is still used in many centers because adoption of the C2 strategy requires logistical changes in transplant units, with a narrow admissible post-dose collection window (2 h ± 15 min) [13,22]. The sample collected two hours after ingestion usually falls at the peak of absorption and correlates well with CsA AUC0-12 [3,8]. AUC0-12 was only a slightly better predictor of CAN than C2 [14]. Some studies have confirmed that AUC0-4 is more predictive of rejection than AUC0-12, and that C2 is the best single-point correlate of AUC0-4 [3,22,23].

Long-Term Monitoring - Is C2 Enough?

Long-term graft loss was associated with CsA C2, but not with C0 levels [5,24]. Beyond the fact that the C2 time point incorporates a measure of absorption, distribution and possibly elimination, reflecting CsA exposure throughout the first 4 hours post-dose, other evidence supports C2 use in clinical practice. In pediatric populations, retrospective analysis of chronic rejection episodes showed a high correlation between sparse C2 values and the development of chronic rejection [25]. Since C2 monitoring can also detect CsA overexposure in maintenance patients, there is evidence that dose reduction helps prevent or manage nephrotoxicity, improving short-term renal function and blood pressure control [22], and can even predict diastolic dysfunction in kidney recipients without impairment of their contractile performance [26]. Higher C2 levels were associated with better renal function and histological structure in protocol kidney graft biopsies performed 1 year after transplant [9].

Data on the utility of C2 levels in stable kidney transplant patients beyond one year post-transplantation are controversial. Some works did not find C2 helpful in identifying patients at risk of rejection, but suggested it may be useful to detect over-immunosuppression and further improve long-term allograft survival by reducing CsA nephrotoxicity [17,27]. Monitoring of the CsA dose by C2 allows the identification of a variable number of overexposed patients, but did not show better reproducibility of C2 concentrations with respect to C0, with high intra-patient variability when considering the coefficient of reproducibility in consecutive determinations [19,27].

Other data support the use of C2 in long-term kidney transplant patients. A retrospective study identified a C2 range associated with a low level of chronic renal allograft dysfunction over a mean follow-up of more than three years [14]. The same group conducted a prospective trial of 110 patients more than 12 months post-transplant, with similar follow-up, to evaluate the clinical utility of C2 monitoring, and found that low C2 levels are associated with an increased risk of chronic allograft nephropathy (CAN), while high C2 levels expose subjects to reversible creatinine increases, tremor and hirsutism, advocating a target range that balances these undesired effects. Interestingly, complementary information on non-selected patients (excluded from the trial due to recent creatinine alterations or recent rejection) suggests that once chronic damage is present, as a result of under- or overexposure to CsA, late adjustment of CsA is of no benefit [28].

A small proportion of patients show markedly delayed absorption of CsA (delayed time to Cmax). These "slow absorbers" include patients with delayed gastric emptying (e.g., diabetics). If a patient acutely shows a low C2 on day 3 or later, measurement of a later time point (C6) is recommended. In true low absorbers both C2 and C6 will be low, but in slow absorbers C6 is likely to be higher than C2, which creates the risk of increasing the dose based on a low C2 and producing toxicity [1]. Einecke et al. published a prospective analysis of the predictive value of dose-proportional relative CsA absorption (expressed as dose-adjusted levels, or C2/dose) in 41 recent de novo renal transplant patients treated with CsA-ME plus steroids, sodium mycophenolate and basiliximab, with follow-up to 6 months. Receiver-operating characteristic (ROC) analysis did not detect discriminative C2 values as a predictor of rejection (biopsy-proven) or toxicity (dose-responsive). A substantial proportion of patients (30%) showed poor/low absorption (characterized by low C2 levels and high C0 levels), and in this setting C2 monitoring alone does not detect toxicity in poor and/or slow absorbers [17].
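The C2/C6 decision rule for suspected slow absorbers can be sketched as a small classifier. The rule follows the text; the numeric C2 threshold is a hypothetical placeholder, since target C2 ranges vary by center and time post-transplant:

```python
def classify_absorption(c2_ng_ml, c6_ng_ml, c2_low_threshold=800.0):
    """Sketch of the C2/C6 rule for a low early post-transplant C2.

    The 800 ng/mL threshold is an illustrative assumption. Per the text:
    when C2 is low, a C6 measurement distinguishes true low absorbers
    (C6 also low) from slow absorbers (C6 higher than C2), in whom a
    dose increase based on C2 alone would risk toxicity.
    """
    if c2_ng_ml >= c2_low_threshold:
        return "adequate C2 - no extra sampling needed"
    if c6_ng_ml > c2_ng_ml:
        return "slow absorber - do not increase dose on C2 alone"
    return "low absorber - consider dose adjustment"
```

A slow absorber (low C2 but higher C6) is therefore flagged before any dose change, which is the safety point the paragraph above makes.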

It is unknown whether the high correlation between C2 levels and CsA AUC0-12 shown in the short-term post-transplant setting persists later after transplantation. Stepwise long-term CNI reduction strategies described to manage chronic CNI toxicity (histologically proved or preemptive), in different patient age scenarios, may demand careful monitoring of CsA C0 and C2 and of AUC0-12 (usually by Bayesian estimators), as well as the trough concentrations of the other immunosuppressants involved. As a result, a significant reduction of cyclosporine exposure can be achieved while still assuring therapeutic levels, with improvement in renal function and no excess risk of rejection. In cyclosporine-sparing strategies with substantial reduction of drug exposure (as much as 50%), it seems safer to use a more precise pharmacokinetic technique than C2 alone [29,30].

Pharmacodynamic Monitoring: Is It the Future?

Conventional therapeutic drug monitoring of blood immunosuppressant levels may not necessarily predict the pharmacologic effects on immune cells [31]. Specific pharmacodynamic assays to assess the degree of calcineurin inhibition were developed to detect the production of specific cytokines. This can be performed by enzyme-linked immunosorbent assay (ELISA) or flow cytometric assay, especially for interleukin-2 (IL-2), but also interferon-γ (IFN-γ) and tumor necrosis factor α (TNF-α). These are products of nuclear factor of activated T cells (NFAT)-regulated genes, downregulated because calcineurin inhibition prevents NFAT dephosphorylation [5,31,32]. It can be observed that the higher the cyclosporine blood concentration (coinciding with the C2 peak), the greater the inhibition of calcineurin [33].

Alternatively, cytokine mRNA expression can be measured in whole blood by reverse-transcription polymerase chain reaction (RT-PCR), or calcineurin-phosphatase activity can be measured by different methods (high-performance liquid chromatography (HPLC) with ultraviolet detection, radioactive, or spectrophotometric assays). The relationship between these results and the immunosuppressive state, however, remains controversial. Pharmacodynamic studies have given experimental evidence that the maximum inhibition of calcineurin in lymphocytes coincides with the CsA peak of uptake (C2) [5,9,12,31]. Despite these findings, and although TDM for CNIs is routinely performed, the drug level obtained by the assay does not necessarily correlate with either the individual drug efficacy or the net state of immunosuppression on an individual basis (e.g., considering the specific organ, time post-transplant, and interaction with other therapies) [5,34,35].

Non-specific pharmacodynamic biomarkers reflect the semiquantitative overall activity of the immune system, with the theoretical advantage of measuring the net state of immunosuppression rather than a specific immunosuppressant's activity or blood level. They involve assays of T-cell proliferation; detection of T-cell surface antigens (CD25, CD71, CD154) or T-cell subsets (CD4+CD25+); detection of donor-specific HLA antibodies (DSA); and detection of soluble CD30, but their discussion is beyond the scope of this paper [5].

A commercially available immune cell function test (ImmuKnow, Cylex™) measures the amount of ATP in CD4+ cells, which correlates with lymphocyte activation and clonal expansion capacity. Briefly, heparinized whole blood is incubated overnight with a mitogen (phytohemagglutinin), the immunoselected cells are separated by a magnetic strategy, undesired cells are washed out, and the cells are lysed so that the released intracellular ATP of CD4+ T cells can be measured by bioluminescence [34-36]. The result is reported as ATP in ng/mL. Low ATP amounts indicate a high state of immunosuppression, with risk of active viral replication; conversely, high ATP amounts suggest an elevated risk of rejection, due to insufficient inhibition of T-cell activation (indicating a low state of immunosuppression) [35,37].
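The interpretive logic of this assay can be sketched as a simple classifier. The 225/525 ng/mL cut-offs below are the low/moderate/strong immune-response zones commonly cited for this test, but they are not stated in this paper and should be treated here as illustrative assumptions:

```python
def interpret_immuknow(atp_ng_ml, low_cutoff=225.0, high_cutoff=525.0):
    """Sketch of ImmuKnow-style interpretation of the CD4+ ATP result.

    Cut-offs (ng/mL of ATP) are commonly cited reference zones, used
    here only as illustrative assumptions, not clinical guidance.
    """
    if atp_ng_ml < low_cutoff:
        # Low ATP: strong immunosuppression, risk of active viral replication
        return "over-immunosuppression (infection risk)"
    if atp_ng_ml > high_cutoff:
        # High ATP: weak inhibition of T-cell activation, rejection risk
        return "under-immunosuppression (rejection risk)"
    return "moderate immune response"
```

The two extremes of the scale map directly onto the clinical risks described above: viral replication at the low end and rejection at the high end.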

Another insight linking pharmacokinetics and pharmacodynamics came from the evidence that active transport is a major determinant of inter- and intra-individual variability, playing a role in first-pass metabolism and hepatic elimination [38]. In the intestine, efflux is mediated by P-glycoprotein (reducing bioavailability and altering CsA pharmacokinetics by reducing the rate of absorption and Cmax). In hepatocytes, uptake transporters [in sinusoidal membranes, the solute-carrier (SLC) family, especially the organic anion-transporting polypeptide (OATP), organic anion (OAT) and organic cation (OCT) subfamilies] and efflux transporters [in biliary membranes, namely the ATP-binding cassette (ABC) transporters ABCB1 (P-glycoprotein) and ABCC2 (MRP-2)] regulate the amount of solute exposed to drug-metabolizing enzymes. At present, however, there is evidence only of efflux transporters for immunosuppressants, not of active hepatic uptake. P-glycoprotein may also be involved in drug distribution into the brain, and perhaps in the neurotoxic effects of CNIs. Drug interactions can also affect metabolism, with drugs acting as substrates while at the same time inhibiting or stimulating hepatic enzymes such as CYP3A [38].

Pharmacogenomics studies the polymorphisms of transporters that can cause interindividual pharmacokinetic variability. Most are single-nucleotide polymorphisms (SNPs), but at the moment only a few results support an influence of specific SNPs on CNI pharmacokinetics (especially an ABCB1 SNP with cyclosporine, which could be associated with low immunosuppressant exposure in some ethnic groups due to alterations in P-glycoprotein; the same applies to CYP3A4 and, especially, CYP3A5 with tacrolimus). Age and gender can also affect CYP3A activity, and ABCB1 expression can be changed by steroids and thyroid hormones [38].

Food, herbs and fruits can also interfere with metabolism [38,39]. The influence of grapefruit in modifying CsA bioavailability is well known: it inhibits pre-systemic metabolism by CYP3A4 located in the enteric mucosa and/or decreases P-glycoprotein-mediated transport of CsA back from enterocytes into the gut lumen, augmenting the AUC [38,40]. Conversely, red wine and red grape juice act on intestinal P-glycoprotein efflux through their polyphenolic compounds, reducing CsA-ME absorption with a considerable decrease in Cmax and AUC [41]. Even some diseases can affect metabolism (e.g., hepatitis C virus decreases CYP3A4 activity, and inflammatory cytokines suppress the hepatic uptake and efflux transporters, leading to lower metabolism and higher CsA trough serum levels) [38,42].

Pharmacodynamic strategies determine biological drug efficacy in vivo and are a promising tool to support clinical decisions on the dose and type of immunosuppressive drug, but at this moment they cannot be considered a definitive substitute for pharmacokinetic monitoring.

How to Manage Cyclosporine Toxicity?

Despite improvements in early graft survival due to a reduction in rejection, long-term follow-up shows that calcineurin inhibitors are implicated in the development of nephrotoxicity. Histological findings of cyclosporine toxicity are extensive arteriolar hyalinosis and obliteration, frequently with glomerular and interstitial fibrotic changes. The damage to the micro- and macrovasculature leads to progressive allograft dysfunction, associated with proteinuria and hypertension. Functional alterations can be present even before morphological changes [43-53]. Histological patterns can usually be associated to a greater or lesser degree with chronic vascular changes, but after reduction of exposure to cyclosporine patients usually show a favorable response [54].

Chronic rejection (or chronic transplant nephropathy), usually due to insufficient immunosuppression, can also induce graft loss and shares histological patterns with cyclosporine nephropathy. These two histologically similar but pathophysiologically distinct entities were previously reported indistinctly as chronic allograft nephropathy (CAN), although their inducing mechanisms are opposite. CAN is frequently involved in allograft lifespan reduction; cardiovascular mortality and CAN are the most important causes of graft loss after the first year of transplantation. The somewhat similar patterns of chronic calcineurin-inhibitor nephrotoxicity and chronic rejection were clarified especially after the identification of pericapillary/glomerular C4d immunohistochemical staining and collagen I production in cases of chronic rejection, both absent in nephrotoxicity [43].

The possibility of inducing chronic nephropathy led to trials aimed at reducing calcineurin inhibitor (CNI) exposure by different means: (1) a phased reduction of CNI doses without drug additions; (2) de novo avoidance of cyclosporine; (3) substitution of cyclosporine with another calcineurin inhibitor; or (4) sparing of the calcineurin inhibitor by use of azathioprine, mycophenolic acid derivatives or mTOR inhibitors [43].

Reduction of CNI doses

This approach carries a risk of inducing acute rejection, but some studies have not demonstrated this risk when the reduction is performed in patients with stable renal function [43,55-59]. After the first year, in a stepwise fashion, some groups achieved reductions of up to 50% of the original daily dose [60], with benefits in renal function [54,60,61] and cardiovascular profile, improving blood pressure and lipid control [60,61], with only a modestly increased risk of reversible rejections [61]. Knight and Morris, in a systematic review, point to potential benefits of switching cyclosporine monitoring from C0 to C2, such as dose reduction, some economic benefit in overall immunosuppression, and potentially a reduced risk of CAN; however, direct evidence is limited and of poor quality, especially because of the limited follow-up periods of the identified studies [62-64]. Against this strategy stands especially the fact that there is little evidence that simple dose reduction, without the addition of another agent, is enough to avoid chronic interstitial fibrosis/tubular atrophy (IF/TA) [43,54]. The CAESAR study [55], a randomized controlled trial involving 536 patients, compared three alternative protocols (early withdrawal of CsA, low-dose maintenance of CsA, or standard-dose CsA), all including induction with daclizumab and use of MMF. Patients treated only with mycophenolate (MMF) and corticosteroids showed an increased risk of acute rejection, and no advantage was observed in terms of renal function, hypertension or lipids in the low-dose or withdrawal arms. Complete withdrawal of CNIs under MMF and corticosteroids in the first year of transplantation seems to produce a clinically relevant increased risk of late acute rejection [56,59].

Switching to another Calcineurin Inhibitor

Some theoretical advantages of tacrolimus over cyclosporine have been reported, such as a better cardiovascular risk profile among tacrolimus-treated non-diabetic patients; this prompted investigations of substituting tacrolimus for cyclosporine, even though reduced cardiovascular events were not necessarily evidenced [43,44]. In fact, Afzali reviewed studies of de novo transplants that received tacrolimus: better early post-transplant renal function, lower rejection rates, significantly higher graft survival, and better lipid and cardiovascular profiles were shown with tacrolimus in non-diabetic patients [43,45]. On the other hand, recent studies do not agree with these observations. A prospective study focused on pharmacoeconomics, comparing 134 tacrolimus-treated versus 66 cyclosporine-treated patients in four-drug protocols (both including basiliximab induction, an anti-metabolite and prednisone), stated that both CNIs are comparable in terms of safety (CMV infections, anti-hypertensive requirement and post-transplant diabetes incidence), efficacy and cost [46]. In liver transplantation, another randomized pharmacoeconomic trial involving 60 de novo transplants, comparing C2 Neoral versus C0 tacrolimus monitoring in a 1:1 fashion, concluded that these two strategies are at least equivalent in terms of safety, efficacy and cost after 1 year of follow-up [47].

Histological findings of chronic toxicity include medial hyaline deposits in afferent arterioles, interstitial fibrosis (in a "striped" pattern) and tubular atrophy, followed by glomerular sclerosis if the injurious factor persists. Both CNIs share these patterns of toxicity, neither showing a good correlation between the extent of lesions and serum trough levels [62]. In terms of avoiding chronic allograft nephropathy, a lower degree of interstitial fibrosis was observed in biopsies from tacrolimus-treated patients, which may be at least partially due to a higher expression of genes encoding matrix components in the presence of cyclosporine, compared with tacrolimus [48,49]. These data, however, involved small numbers of patients in 6-month protocol biopsies and were not supported by larger retrospective multicenter studies with longer follow-up periods (2-3 years) [43,50,51].

Literature on switching from cyclosporine to tacrolimus to revert biopsy-proven chronic allograft nephropathy, with shorter follow-up times, failed to show a difference [43]. Waid et al. performed a conversion from cyclosporine to tacrolimus in patients with elevated creatinine at least 3 months post-transplant, randomly assigned to one of two parallel groups (a 2:1 switch to tacrolimus or continuation of cyclosporine). A baseline biopsy confirmed histological findings of chronic allograft nephropathy in 90% of enrolled patients. Two-year follow-up showed improvement in creatinine and lipid profile, with significantly fewer cardiovascular events and no differences in the incidence of acute rejection or new-onset hyperglycemia after conversion [52]. However, results from the same study after 5 years of follow-up did not demonstrate an impact on patient or graft survival [53]; these findings are largely shared by other multicenter randomized trials with similar designs [65-68]. Criticisms of these studies centered on the fact that irreversible fibrosis at enrollment limited the possibility of showing a less toxic profile for tacrolimus; other explanations include higher exposure to mycophenolic acid in the tacrolimus group (an interaction between cyclosporine and mycophenolic acid reduces plasma levels of the latter) and relatively higher cyclosporine trough levels in patients with dysfunction late after transplantation [62,69-73].

A meta-analysis of 123 reports from 30 trials (including 4102 patients) showed that tacrolimus, compared with cyclosporine, improved graft survival (44% reduction in graft loss within the first 6 post-operative months) but doubled the risk of new-onset diabetes mellitus requiring insulin; the authors suggest tailoring target tacrolimus concentrations over the first years to optimize the risk-benefit ratio [67,74,75].

Sparing of Calcineurin Inhibitor

The incidence of acute rejection decreases with time; this observation allowed the development of many protocols including an adjunct therapy to permit CNI reduction or withdrawal. Azathioprine was usually employed as a steroid-sparing agent. When used to substitute for cyclosporine, cardiovascular risk and gout are reduced, but a higher incidence of acute rejection was also observed [43,76-79]. In CNI-based immunosuppression, patients with stable allograft function switched to azathioprine after the first year of transplantation showed improved graft function but an increased risk of acute rejection [80]. An open-label early conversion from cyclosporine to azathioprine, together with a temporary increase of corticosteroids, was performed by Bakker et al., with long-term follow-up (15 years), showing superior graft survival among azathioprine patients, though without statistical significance [81]. The weak early immunosuppression obtained with azathioprine in early CNI withdrawal protocols, and perhaps a publication bias concerning this medication, may explain the few positive works and the lack of negative data (due to unpublished poor results) [43,56]. Owing to its greater net immunosuppressive potency, the antiproliferative mycophenolate gradually replaced azathioprine in clinical practice and protocols.

Despite this, a randomized, prospective multicenter trial is currently being conducted in Italy to compare the effect of mycophenolate mofetil versus azathioprine on CAN prevention when used as the sole immunosuppressive therapy for kidney transplant recipients (ATHENA study) [65]. Patients previously received a double induction regimen with basiliximab and low-dose thymoglobulin to increase the safety of the planned cyclosporine withdrawal; no preliminary results have been published to date [66].

Patients on a CNI-based regimen with established CAN could in theory benefit from conversion to CNI-free regimens, which include mycophenolate and/or sirolimus. This was tested by the Efficacy Limiting Toxicity Elimination (ELITE-Symphony) study, which randomized 1645 patients to assess whether a mycophenolate mofetil-based regimen would allow administration of lower doses of adjunct immunosuppressive agents (namely cyclosporine, tacrolimus, or sirolimus) while still maintaining an acceptable rate of acute rejection and a more favorable tolerability profile. The primary end-point was estimated glomerular filtration rate at 12 months after transplantation. The study concluded that a regimen of daclizumab, mycophenolate mofetil and corticosteroids combined with low-dose tacrolimus was advantageous for renal function, allograft survival and acute rejection rates compared with low-dose cyclosporine, low-dose sirolimus, or standard-dose cyclosporine without induction [63]. The low sirolimus C0 levels proposed in the Symphony study were probably one of the causes of the increased acute rejection rates [82].

The CONVERT trial prospectively evaluated randomized conversion from CNI immunosuppression to sirolimus, compared with maintenance of the CNI. Despite a lower rate of malignancies among patients converted to sirolimus, those with renal function below 40 mL/min at the time of conversion, or with significant proteinuria, not only failed to improve graft function but experienced further deterioration of the remaining graft function and increased proteinuria; the CNI-sparing strategy thus failed precisely in the population for which it was theoretically intended [70,71].

Systematic reviews and meta-analyses of randomized controlled trials designed to avoid CNIs showed a small advantage in renal function with the use of mycophenolate (MMF) [57,59], at the cost of an increased risk of rejection. With MMF plus sirolimus, the results were even worse, not only because of a higher risk of rejection and reduced graft survival, but also because of high discontinuation rates due to side effects [58,59].

A recently published trial, "Spare the Nephron", proposed conversion of stable patients from CNI to sirolimus, with mycophenolate therapy continued in both arms. There was an evident benefit in graft function at the first year, not sustained in the second year of follow-up. Other results were fewer deaths in the sirolimus plus mycophenolate arm and a trend toward fewer biopsy-proven acute rejections [72]. A recent multicenter, open-label study used initial treatment with cyclosporine dosed by trough concentrations, corticosteroids and induction with basiliximab, allocating patients to maintenance of CNI-based therapy or conversion to an everolimus-and-mycophenolate protocol. After the first year of follow-up, renal function was improved in the everolimus group, but biopsy-proven rejection was higher in this group after conversion [73].

This strategy of not allocating patients to an mTOR inhibitor immediately after the post-operative period aims to bypass the undesired effects of these immunosuppressants on wound complications (such as impaired wound healing and fluid collections) frequently reported with early introduction of this therapy [82]. However, the data are not unanimous. The recently published CALLISTO study evaluated whether everolimus, introduced either de novo or by conversion from a CNI, affected rates of biopsy-proven acute rejection, delayed graft function or wound-healing complications. Since no differences were found, there appears to be no benefit in delaying everolimus initiation [74].

mTOR inhibitors, such as sirolimus and, more recently, everolimus, have been proposed as ideal alternatives for definitive substitution or sparing of CNIs, but many large trials have revealed nephrotoxicity (with increased proteinuria), especially when combined with CNIs, as well as myelotoxicity and low immunosuppressive potency (with a higher incidence of acute rejection than with CNIs). Despite their immunosuppressive effects on regulatory CD4+ T cells (partly through increased transcription of the FOXP3 protein), mTOR inhibitors also have immunostimulatory properties on monocytes, macrophages and peripheral dendritic cells [promoting the transcription factor complex NF-kB, upregulating proinflammatory cytokines - interleukins (IL)-12, IL-23, IL-6 and tumor necrosis factor (TNF) - and suppressing anti-inflammatory IL-10] and block production of the anti-inflammatory transcription factor encoded by STAT3. This somewhat dual behavior could restrict the future ability of this pharmacological class to become the first-choice immunosuppressor in many protocols [69]. The ZEUS trial analyzed early elimination of the calcineurin inhibitor (after 4-5 months of use), replacing it with everolimus. Despite an improvement in renal function at 12 months, the everolimus group presented higher rates of biopsy-proven acute rejection after randomization [73].

An interesting approach is the minimization of CNI doses with the use of everolimus. The two drugs act synergistically on IL-2 signaling, and their combined use allows low doses of both immunosuppressors (CsA more than doubles the AUC of everolimus), with high rates of graft survival, low rejection rates and good graft function demonstrated in some randomized controlled trials (RCT), although with short follow-up at this moment [59]. A recent single-center RCT compared low exposure to CsA (C2-determined, between 200-350 ng/mL) plus high exposure to everolimus (C0-determined, between 8-12 ng/mL) versus normal exposure to CsA plus enteric-coated mycophenolate sodium (MPS). The everolimus group showed a lower incidence of delayed graft function, a slightly better 1-year graft survival rate, a significantly higher GFR and lower systolic blood pressure.

What is the Future of Cyclosporine in Immunosuppression?

Newer strategies addressing the growing acceptance of borderline donors are under way. In these settings, a reduction of CNI exposure is desirable, and the association of low-dose CNI and everolimus seems to be a promising alternative [59]. Double induction with basiliximab and low-dose thymoglobulin could be a step toward minimization of early oral immunosuppression [75], especially in the critical early post-operative period. The best immunosuppressive scheme is always the one designed for a specific scenario; a single, definitive "one-size-fits-all" protocol is not a reasonable choice, since many variables are usually involved in choosing the best scheme.

New strategies to reduce cyclosporine nephrotoxicity have also been proposed recently, such as the experimental use of either erythropoietin (EPO) or carbamylated erythropoietin (CEPO) to reduce CsA-induced tubular apoptosis, interstitial fibrosis and macrophage infiltration; increased PI3-kinase activation and Akt phosphorylation were demonstrated, as well as inhibition of TGF-β and type-1 collagen mRNA in the renal cortex [76]. However, whether this is applicable to humans, and to what degree it would change the immunosuppressive ability of CsA, is not yet known.

To optimize immunosuppression, our center adopted a strategy of measuring both C0 and C2 serum levels of cyclosporine at each visit, independently of time after transplant. This allows a more individualized adjustment of therapy and is particularly important in the first 6 months. In our experience, an adequate C2 level does not always assure an adequate level at the end of exposure (C0), which can be too low. If one must choose between the two measures, however, we agree that C2 is superior both for identifying an adequate degree of immunosuppression, because it better reflects the AUC, and for guiding a planned reduction when features of nephrotoxicity and chronic allograft nephropathy are seen in biopsies.
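The dual-monitoring logic described above can be sketched as a simple classification of each C0/C2 pair against center-defined windows. This is an illustrative sketch only: the numeric target ranges below are hypothetical placeholders, not clinical recommendations, since actual targets vary by center, assay and time after transplant.

```python
def assess_csa_levels(c0_ng_ml, c2_ng_ml,
                      c0_target=(100, 200), c2_target=(800, 1200)):
    """Classify a cyclosporine C0/C2 pair against (hypothetical) target windows."""
    c0_ok = c0_target[0] <= c0_ng_ml <= c0_target[1]
    c2_ok = c2_target[0] <= c2_ng_ml <= c2_target[1]
    if c2_ok and c0_ng_ml < c0_target[0]:
        # The discordant pattern noted in the text: adequate C2 (peak
        # exposure, tracking AUC) but a trough level (C0) that is too low.
        return "adequate C2, low C0"
    if c0_ok and c2_ok:
        return "both in range"
    return "out of range"

print(assess_csa_levels(80, 950))    # adequate C2, low C0
print(assess_csa_levels(150, 1000))  # both in range
```

The point of the sketch is that checking C2 alone would report the first patient as adequately exposed, while the paired measurement flags the low trough that the authors describe encountering in practice.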

Tacrolimus became the natural first choice over cyclosporine in CNI-based protocols. It became an even more attractive option after the introduction of a once-daily prolonged-release formulation, with safety and efficacy similar to twice-daily (BID) tacrolimus [77,78]. In terms of adverse effects, compared with cyclosporine, tacrolimus has a more favorable profile for traditional cardiovascular risk factors such as hyperlipidemia and hypertension, but increases the risk of hyperglycemia and new-onset post-transplant diabetes to a greater degree than CsA [79]. Newer strategies combining mTOR inhibitors with low-dose CNI have been proposed in de novo kidney transplants from elderly donors, using tacrolimus as the CNI [83].

Cyclosporine remains important not only in different protocol studies but also in some specific clinical settings. Focal segmental glomerulosclerosis (FSGS) is a podocyte disease that can induce nephrotic-range proteinuria and recurs in kidney transplants in 10-30% of cases. In post-transplantation FSGS recurrence, treatment is based on plasmapheresis associated with high doses of cyclosporine A, attributed to a direct anti-proteinuric effect of CsA on podocytes, stabilizing their actin cytoskeleton [84]. Another situation is posterior reversible encephalopathy syndrome (PRES), in which both CNIs (tacrolimus and cyclosporine) have been implicated in the pathophysiologic mechanism (cytotoxic effects, endothelial dysfunction and vasogenic edema) [85]. The non-tolerated CNI in PRES should be discontinued or, in some cases, switched to the alternative CNI [86]. Despite the decreasing interest in new protocols using cyclosporine, it cannot be considered a supplanted therapy, and its role in immunosuppression is not yet over.

References
