Received Date: December 01, 2011; Accepted Date: January 11, 2012; Published Date: January 13, 2012
Citation: Graff RJ, Duffy B, Xiao H, Radell J, Lentine KL (2012) The Role of the Crossmatch in Kidney Transplantation: Past, Present and Future. J Nephrol Therapeutic S4:002. doi:10.4172/2161-0959.S4-002
Copyright: ©2012 Graff RJ, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Since Murray’s successful transplant of an allogeneic kidney in 1959, renal transplantation has evolved from an experimental technique into an accepted modality for the treatment of end-stage renal disease, with more than 310,000 renal transplants having been performed by 2010. This achievement constitutes a classical example of multidisciplinary collaboration: the description of the rejection of canine allografts by a surgeon, the appreciation by geneticists that the targets of rejection are inherited [4,5], the elucidation of the mechanism of allograft rejection by biologists, and the adaptation of this knowledge to the bedside by clinicians [1,7].
The two mechanisms employed to minimize transplant loss from rejection are the suppression of the recipient immune response with medications and immunogenetic characterization of the recipient. Early immunosuppressive medication regimens allowed transplantation of un-sensitized recipients, but transplantation of sensitized recipients was associated with immediate and early graft rejection. The most aggressive anti-rejection drug regimens available could not save these kidneys. The utilization of a complement dependent microcytotoxicity crossmatch (CDCXM), an assay that measures cell-bound antibody by its ability to bind complement and cause cell lysis, allowed the identification of recipient pre-sensitization to the donor kidney, as well as the recognition of the association between a CDCXM+ result and immediate graft loss, providing an avenue for its avoidance.
Over the last 50 years, as a result of improvements in immunosuppressive drug regimens and immunological evaluation techniques, kidney transplant outcomes have greatly improved. The objective of this report is to review the evolution of crossmatch techniques and their associations with kidney transplant outcomes. Specifically, we review kidney graft outcome data in CDCXM, flow cytometry crossmatch (FCXM) and virtual crossmatch (VXM) negative (–) and positive (+) kidney transplant recipients. In several instances we updated our prior analyses using more recent Standard Transplant Analysis and Research (STAR) files provided by the Organ Procurement and Transplantation Network (OPTN). We also review the role of antibody reduction therapy and post-transplant monitoring.
Because rejection-associated graft loss was observed in CDCXM– recipients, the initial CDCXM was refined to improve immunologic characterization of the recipient through the use of separated T and B lymphocyte target cells, the addition of wash steps, extended incubation, and the use of anti-human globulin (AHG) augmentation [12,13]. Flow cytometry technology was adapted to create an FCXM. The flow cytometer measures cell-bound antibody with a fluorescing label, the amount of antibody being quantified by fluorescence intensity. The unit of intensity is called a channel, and the difference between control and experimental samples is called a channel shift. Although each laboratory sets its own criteria for a positive test, a 40 channel shift for T cells and an 80 channel shift for B cells are generally considered positive. Finally, solid phase technology was developed with the ability to accurately identify anti-HLA antibodies and was applied as a virtual crossmatch (VXM) [15-17]. These advances allowed the detection of low titers of antibody that were previously undetectable.
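The channel-shift interpretation described above amounts to a simple threshold comparison. The sketch below illustrates the logic using the 40-channel (T cell) and 80-channel (B cell) cutoffs cited in the text; the "weak positive" margin is a hypothetical example, since each laboratory defines its own criteria.

```python
# Illustrative sketch of FCXM interpretation. The 40/80 channel-shift cutoffs
# are the commonly cited defaults from the text; the half-cutoff
# "weak positive" margin is a hypothetical example, not a standard.
def interpret_fcxm(t_shift, b_shift, t_cutoff=40, b_cutoff=80):
    """Classify a flow cytometry crossmatch from T and B cell channel shifts.

    A channel shift is the difference in fluorescence intensity between the
    patient serum and the negative control serum. Pass None for a target
    that was not measured.
    """
    def classify(shift, cutoff):
        if shift is None:
            return "not measured"
        if shift >= cutoff:
            return "positive"
        if shift >= 0.5 * cutoff:      # hypothetical weak-positive margin
            return "weak positive"
        return "negative"

    return {"T": classify(t_shift, t_cutoff), "B": classify(b_shift, b_cutoff)}

# Example: a T cell shift of 55 channels with a B cell shift of 30 channels
result = interpret_fcxm(55, 30)   # {"T": "positive", "B": "negative"}
```

Each laboratory would substitute its own validated cutoffs for the defaults shown here.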
Through the years, there has been an evolution toward the use of more sensitive crossmatch technologies, as shown by analysis of data from the United States (U.S.) OPTN registry for kidney transplants performed from 1987 through 2005. An updated analysis of the OPTN STAR files for transplants performed from 1995 through 2009, prepared for this article, demonstrates that the use of FCXM including both T and B cell reactivity as the most sensitive test increased from 17% to 58.3% (Figure 1). Simultaneously, the use of T cell AHG CDCXM and B cell crossmatch as the most sensitive test decreased from 37.5% to 26.6%, and the use of less sensitive techniques alone declined more markedly, from 45.5% to 15.1%. During the same period there has been an evolution from cellular to solid phase antibody screening, described below in the “virtual crossmatch” section. On Oct. 25, 2006, the United Network for Organ Sharing (UNOS) began requiring specificity information to identify unacceptable antigens, thus encouraging the use of single antigen beads (SAB).
In their 1969 review, Patel and Terasaki reported outcomes of 413 transplants performed in 15 U.S. transplant centers. Twenty-four of 30 (80%) recipients transplanted with a CDCXM+ result lost their grafts immediately, one graft was lost at 3 months, 2 grafts were surviving at 3 months post-transplant, and 3 grafts were surviving more than 3 months after transplant. In contrast, only 4 of 168 (2.4%) recipients transplanted with CDCXM–/panel reactive antibody (PRA)– results suffered immediate graft loss. As a consequence of that report, the presence of a positive crossmatch has generally been considered a contraindication to kidney transplantation, although some transplants do proceed after positive crossmatch results, especially when the crossmatch is performed by the most sensitive techniques.
Analyses of outcomes of recipients transplanted with positive crossmatch results have shown improvement since the classical study of Patel and Terasaki. In Gebel, Bray and Nickerson’s 2003 review of 23 reports, the median one-year graft survival reduction associated with CDCXM+ and/or FCXM+ results was 12% among first transplant recipients and 35% among re-transplant recipients. In a single center study with longer follow-up, Mahoney et al. reported that of 22 transplants performed after FCXM+ results, 12 were lost in the first two months, but the remaining 10 were still functioning at two years. As an outgrowth of these reports, a commonly held belief developed that patients transplanted after a positive crossmatch who avoided early graft loss faced no greater long-term risks than patients transplanted after a negative crossmatch.
In an analysis of OPTN registry data for transplants performed in 1995-2007, Graff et al. found contrary results. Specifically, they observed that FCXM+ compared to FCXM– results were associated with a 4-12% reduction in five year graft survival, depending on the type of donor and the lymphocyte target used, and that patients transplanted with FCXM+ results continued to show decreased graft survival beyond the first year. In a subsequent study, a detrimental effect was again observed in years 1 to 5 after transplant, but no detrimental effects were seen in the 5 to 10 year period. These studies are described in more detail in a subsequent section.
Theoretically, a more sensitive test would identify a positive crossmatch not identified by a less sensitive test, and its use would therefore result in better outcomes among recipients transplanted with negative results. The benefit of the use of more sensitive crossmatch techniques was addressed by Salvalaggio et al. [21,22] in an analysis of OPTN registry data for transplants from 1987 through 2005. By multivariate Cox regression, compared with T AHG CDCXM–/B– crossmatch results, T–B– FCXM results were associated with a significantly lower incidence of acute rejection during the first year after transplant (aOR=0.85, P<0.0001). Five-year graft survival after transplant with T–B– FCXM (82.6%) was modestly better than after T– AHG CDCXM/B– crossmatch (81.4%, P=0.008) or T– AHG CDCXM (81.1%, P<0.0001), but on adjusted analysis was significantly different only among recipients of kidneys from deceased donors and patients aged >60 years. An updated analysis of OPTN registry data from 1995–2009, performed by the authors for this article, showed similar results (Table 1). Thus, more sensitive techniques have had little effect on the outcome of recipients with negative crossmatch results.
| T–B– FCXM comparison | % Survival (P) | aHR (P) |
|---|---|---|
| Living donor: vs. T AHG CDCXM & B | 81.1 (0.07) | 1.0 (0.56) |
| Living donor: vs. Other | 80.3 (0.01) | 1.1 (0.04) |
| Deceased donor: vs. T AHG CDCXM & B | 68.6 (0.001) | 1.1 (0.0006) |
| Deceased donor: vs. Other | 69.1 (0.002) | 1.1 (0.001) |
Produced from OPTN Standard Transplant Analysis and Research Files for transplants performed in 1995–2009 , Xiao H and Lentine KL, 2011.
Table 1: Five year graft survival after a negative crossmatch, stratified by technique (1995–2009).
As noted, the presence of a positive crossmatch has generally been considered a contraindication to kidney transplantation, but not infrequently, patients with FCXM+ results are transplanted and, less frequently, transplants proceed with CDCXM+ results [18,24]. Although some transplants performed with a positive crossmatch result may have been inadvertent, others may have been performed purposefully, the risk of transplanting with a positive crossmatch result being considered less than that of remaining on dialysis. Of those transplanted purposefully after a positive crossmatch, some may have received antibody reduction therapy that did not remove all antibody. Current OPTN STAR files do not include information that allows the identification of positive crossmatch recipients treated with antibody reduction therapies.
Outcomes in FCXM+ recipients
In a 2008 report, Lentine et al. examined OPTN registry data from January 1995 through November 2007 to characterize 5-year outcomes of 66,590 kidney transplants performed after FCXM. Outcomes of FCXM+ transplants are shown in Table 2. Based on target (T cell, B cell or un-separated lymphocytes) and test results (negative, positive, weak positive, not measurable), outcomes could be divided into 14 groups, only three of which (T+B+, T+B not measurable and T–B+ FCXM) showed consistently reduced graft survival compared with T–B– FCXM results (Table 2). The T–B–, T–B weak positive, and T–B+ FCXM groups were particularly revealing. Graft survival after T–B weak positive crossmatches (representing 401 deceased and 366 living donor transplants) was not significantly different from graft survival with T–B– FCXM results. In contrast, transplants performed after a T–B+ FCXM had significantly inferior graft survival compared with the reference group. If weak positive results can be considered to represent low titer, these results indicate a correlation between titer and outcome.
| FCXM result | Deceased donor aHR | 95% CI | P-value | Living donor aHR | 95% CI | P-value |
|---|---|---|---|---|---|---|
| T+, B not measured | 1.44 | 1.12–1.87 | 0.005 | 1.40 | 1.02–1.93 | 0.04 |
aHR, adjusted hazard ratio.
Regression models were adjusted for the following recipient factors as covariates: age category (<18, 19-30, 31-45, 46-60, or >61 years), gender, race (black, white, or other), Hispanic ethnicity, cause of end-stage renal disease (diabetes, hypertension, glomerulonephritis, other), duration of pre-transplant dialysis, peak panel reactive antibodies (<10%, 11%-30%, or >30%), retransplant status, and comorbidities (diabetes, hypertension, peripheral vascular disease, chronic obstructive pulmonary disease); the donor factors age category, gender, race, hypertension, diabetes, and subtype for deceased donors (expanded criteria, donation after cardiac death); degree of HLA mismatch (0 ABDR, 0 DR, or DR mismatch); and cold ischemia time. All models were stratified by transplant year.
*Adapted from Table 4 in  with permission of the publisher.
Table 2: Significant associations of flow cytometry crossmatch results with the relative risk of five-year allograft loss by multivariable Cox regression, adjusted for clinical covariates at transplant (1995–2007).*
As previously noted, Graff et al. challenged the dogma that a positive crossmatch result has no detrimental effects beyond the first year after transplant [20,23], performing a more detailed Kaplan-Meier analysis of OPTN registry data from 1995 through 2009. (Because the large majority of T+ B-not-measurable crossmatches are presumed to be T+B+, and outcomes with T+B+ and T+ B-not-measurable FCXM were similar, these groups were combined into a single group labeled T+ for this analysis.) Outcomes of FCXM+ transplants are shown in Table 3. Among 23,331 living donor (LD) recipients, graft survival in the 24-hour, 0–1, 1–5, 0–5, 5–10 and 0–10 year periods with T+ FCXM results was reduced 0.8% (P=0.06), 6.4% (P<0.0001), 5.2% (P=0.01), 10.2% (P<0.0001), 3.8% (P=0.32) and 9.9% (P<0.0001), respectively, compared with T–B– FCXM results. Over the same periods, absolute graft survival with T–B+ FCXM was reduced 0.4% (P=0.05), 1.9% (P=0.002), 5.3% (P<0.0001), 6.6% (P<0.0001), 0% and 3.7% (P<0.0001). Among 29,819 DD recipients, absolute graft survival with T+ FCXM results was reduced 1.3% (P=0.001), 4.9% (P<0.0001), 1.9% (P=0.36), 5.4% (P=0.0003), 4.6% (P=0.92) and 6.2% (P=0.001), and graft survival with T–B+ FCXM was reduced 0.6% (P=0.02), 1% (P=0.13), 3.6% (P=0.01), 4% (P=0.02), 4.3% (P=0.08) and 5.3% (P=0.0004). Multivariate Cox regression showed no significant negative effect for any group at 24 hours or in the 5–10 year period, but significant detrimental effects for T+ FCXM at 1 year, for T–B+ FCXM during the 1–5 year period, and for both T+ and T–B+ FCXM at 0–10 years among LD and DD recipients. Thus, both early and late effects were present after transplant with T+ and T–B+ FCXM: T+ FCXM had its strongest effect in the first year, T–B+ FCXM demonstrated its strongest effect in years 1–5, and neither had a significant effect in years 5–10, but both had a significant effect over the 10-year period.
| Group | 0–24 hr S% (p) | 0–24 hr aHR (p) | 0–1 yr S% (p) | 0–1 yr aHR (p) | 1–5 yr S% (p) | 1–5 yr aHR (p) | 0–5 yr S% (p) | 0–5 yr aHR (p) | 5–10 yr S% (p) | 5–10 yr aHR (p) | 0–10 yr S% (p) | 0–10 yr aHR (p) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| vs. LD T+ | 98.6 (0.06) | 1.5 (0.22) | 89.4* (<.0001) | 1.8* (<.0001) | 80.9* (0.01) | 1.1 (0.50) | 72.3* (<.0001) | 1.4* (0.0002) | 65.7 (0.32) | 1.0 (0.98) | 47.5* (<.0001) | 1.3* (0.0005) |
| vs. LD T–B+ | 99.0 (0.05) | 1.1 (0.72) | 93.9* (0.002) | 1.1 (0.50) | 80.8* (<.0001) | 1.3* (0.005) | 75.9* (<.0001) | 1.2* (0.007) | 70.8 (0.72) | 1.0 (0.88) | 53.7* (<.0001) | 1.2* (0.008) |
| vs. DD T+ | 97.8* (0.001) | 1.6 (0.09) | 86.5* (<.0001) | 1.4* (0.003) | 75.6 (0.36) | 1.1 (0.56) | 65.4* (0.0003) | 1.2* (0.01) | 54.0 (0.92) | 1.0 (0.94) | 35.3* (0.001) | 1.2* (0.02) |
| vs. DD T–B+ | 98.5* (0.02) | 1.2 (0.32) | 90.4 (0.13) | 1.0 (0.60) | 73.9* (0.01) | 1.2* (0.02) | 66.8* (0.002) | 1.1 (0.18) | 54.3 (0.08) | 1.2 (0.12) | 36.2* (0.0004) | 1.1 (0.08) |
| LD T–B– CDCXM/FCXM– (reference) | 99.5 | — | 97.0 | — | 91.0 | — | 88.2 | — | 81.3 | — | 71.8 | — |
| vs. LD T+ CDCXM/FCXM+ | 95.9* (0.001) | 1.9 (0.39) | 85.5* (<.0001) | 2.4* (0.03) | 96.2 (0.58) | 0.3 (0.18) | 82.2* (0.006) | 1.3 (0.49) | 75.0 (0.38) | 0.9 (0.94) | 61.7* (0.004) | 1.3 (0.48) |
| vs. LD T–B+ CDCXM/FCXM+ | 99.1 (0.60) | 0.6 (0.56) | 91.7* (0.006) | 1.0 (0.99) | 89.0 (0.41) | 0.6 (0.30) | 81.5* (0.01) | 0.8 (0.43) | NA | NA | NA | NA |
| DD T–B– CDCXM/FCXM– (reference) | 99.0 | — | 93.8 | — | 84.9 | — | 79.7 | — | 77.0 | — | 61.3 | — |
| vs. DD T+ CDCXM/FCXM+ | 95.7* (0.03) | 2.8 (0.16) | 89.1 (0.16) | 1.1 (0.83) | 88.5 (0.55) | 0.5 (0.27) | 78.8 (0.60) | 0.8 (0.53) | NA | 1.7 (0.47) | NA | NA |
| vs. DD T–B+ CDCXM/FCXM+ | 99.0 (0.92) | 0.3 (0.24) | 86.6* (0.003) | 1.1 (0.73) | 87.4 (0.91) | 0.7 (0.31) | 75.7 (0.07) | 0.9 (0.56) | 84.4 (0.59) | 0.5 (0.24) | 63.9 (0.18) | 0.8 (0.26) |
*P<0.05. S% is the survival fraction from the start to the end of the interval.
Produced from OPTN Standard Transplant Analysis and Research Files for transplants performed in 1995–2009, Xiao H and Lentine KL, 2011. The sample with FCXM results comprised 23,331 LD and 29,819 DD recipients. The sample with CDCXM and FCXM results comprised 6,736 LD and 6,210 DD recipients.
Table 3: Graft survival in crossmatch+ compared with crossmatch– transplants in the 0–24 hr, 0–1, 1–5, 0–5, 5–10 & 0–10 year intervals (1995-2009).
Outcomes in CDCXM+ recipients
In another recent analysis of OPTN data for 10,261 eligible LD transplants performed from 1995 through 2009, CDCXM+ results were present in 1,044 (10.2%) of these transplants, of which 339 were T+ CDCXM and 705 were T–B+ CDCXM. Of 15,438 eligible DD transplants, CDCXM+ results were present in 1,221 (7.9%), of which 396 were T+ CDCXM and 825 were T–B+ CDCXM. An update of the prior analysis using more recent OPTN STAR files is shown in Table 3. Among LD recipients, in the 24-hour, 0–1, 1–5 and 0–5 year periods, graft survival with T+ CDCXM results was reduced 3.6% (P=0.001), 11.5% (P<0.0001), 0% and 6% (P=0.006), respectively, compared to T–B– CDCXM; graft survival with T–B+ CDCXM was reduced 0.4% (P=0.6), 5.3% (P=0.006), 2% (P=0.41), and 8.7% (P=0.01) in the same time periods. Among DD recipients, graft survival with T+ CDCXM results was reduced 3.3% (P=0.03), 4.7% (P=0.16), 0% and 0.9% (P=0.6), and graft survival with T–B+ CDCXM was reduced 0%, 7.2% (P=0.03), 0% and 4% (P=0.07). Data were not sufficient for analyses beyond 5 years. Cox analysis revealed a significant negative effect limited to LD T+ CDCXM (aHR=2.4, P=0.03). Interestingly, the 1–5 year effect seen with FCXM+ was not seen with CDCXM+ tests.
Certainly in current studies, and probably in the Patel-Terasaki study, transplants performed after positive crossmatch results represent a small minority of all potential crossmatch positive recipients. In comparing current results with those of Patel and Terasaki, an important consideration is the evolution of selection factors in the decision to proceed with crossmatch positive transplantation. In the Patel-Terasaki report, of the 413 transplants, 92% were first transplants, 80% were PRA–, 62% were male and 65% were LD recipients, a distribution associated with relatively good outcome. The report did not offer a breakdown for patients transplanted with CDCXM+ results. In a recent study reviewing OPTN registry outcomes in 25,699 transplant recipients (1995-2009), 85% were first transplants, 15% were PRA–, 59.5% were female and 40% were LD recipients. Patients with most recent PRA >50%, re-transplants, females, and DD recipients were significantly over-represented among CDCXM+ recipients. The characteristics of all patients in the current study, and particularly of the patients transplanted with CDCXM+ results, are associated with relatively poor outcome. Thus patient demographics do not appear to explain improved outcome in the current era. It should be noted that these traits define groups with reduced opportunities for transplantation and suggest that centers are willing to accept inferior outcomes in order to expand transplant access to disadvantaged patients.
The greatest change in outcome in crossmatch positive transplant recipients since 1969 has been the virtual elimination of immediate graft loss [20,23,24] (Table 3). Although there was no presentation of pathological information, it has been assumed that the major cause of immediate graft loss in those transplanted with a positive crossmatch in the Patel and Terasaki series was hyperacute rejection. Clearly, improved immunosuppressive regimens have played a major role in reducing graft loss beyond the perioperative period, but it seems unlikely that they would be able to prevent hyperacute rejection. Although improved surgical and preservation techniques certainly have played a role in the reduction of immediate graft loss since 1969, they cannot explain the difference in graft loss between crossmatch positive and negative recipients in the 1969 study. A possible explanatory factor for the markedly lower risk of hyperacute rejection with crossmatch positive transplants in modern practice may be the selection of recipients with low anti-HLA titers. The relatively insensitive CDCXM technique of 1969 undoubtedly required the presence of high titers of antibody to show a positive reaction, in contrast to today’s sensitive crossmatch techniques, which have undergone multiple modifications to increase sensitivity. Although titer data are not available in either the Patel-Terasaki study or current OPTN records, the previously noted good outcomes in recipients with weak FCXM+ results, which could be considered low titer, support the hypothesis that titer has an effect on outcome. In reports dealing with antibody reduction therapy, both Gloor et al. and Montgomery et al. have reported poorer outcome when the recipient had a CDCXM+ result (indicative of a relatively high antibody titer) than when donor specific antibody (DSA) was present with a CDCXM– result (indicative of a relatively lower titer).
These observations will be considered in more detail in the next section.
If titer is a determining factor in the effect of antibody on outcome, then eliminating, reducing or modulating antibody might have a beneficial effect on kidney transplant outcome. Antibody reduction/modulation therapy has been used to treat many immunologically mediated diseases (reviewed in ). With this background, protocols have been initiated to reduce/modulate antibody levels in sensitized potential kidney transplant recipients. Early protocols utilized immunoabsorbents and plasma exchange. Intravenous immunoglobulin (IVIG) [29-31], anti-B cell agents [29,31] and, transiently, splenectomy [25,26] were later added. The protocols at most centers treated potential LD recipients who were crossmatch positive with their prospective donors, and the protocol of at least one center treated potential DD recipients with high PRA levels. Although all centers using antibody reduction protocols attempted to render potential recipients antibody-free, most transplanted patients with residual antibody [25,26,31,32].
The Cedars-Sinai program transplanted 45 DD and 31 LD HLA-sensitized recipients between July 2006 and February 2009 using IVIG and rituximab therapy. Although class I PRA was reduced by 12.6%, class II PRA by 10% (P=0.01) and the T cell FCXM channel shift by 125, many of the recipients had residual donor specific antibody at the time of transplantation. Patient survival at two years was 100% for LD recipients and 90% for DD recipients. Graft survival at two years was 90% for LD recipients and 80% for DD recipients.
The Hopkins program reported attempted desensitization (plasmapheresis and IVIG, with transient use of splenectomy) of 215 patients, 211 of whom received LD kidneys between February 1998 and December 2009. Patient survival was compared to that of demographically matched dialysis patients. The 1, 3, 5 and 8 year patient survivals of 90.6%, 85.7%, 80.6% and 80.6% in the desensitized group were clearly better than the 91.1%, 67.2%, 51.5% and 30.5% patient survivals in the dialysis group. Stratification of transplant recipients receiving antibody reduction therapy showed the best outcomes in recipients who were FCXM– and DSA+, as defined by antibody screen, at the beginning of therapy, intermediate outcomes in FCXM+/CDCXM– recipients, and the worst outcomes in CDCXM+ recipients. Overall patient survival was clearly better than for those waiting for a crossmatch negative kidney or those remaining on dialysis. No graft outcome data were offered in that publication.
The University of Maryland reported on 41 plasmapheresis and IVIG treated recipients and 41 crossmatch negative control recipients transplanted between February 1999 and October 2006. The authors deemed the difference in graft survival at one year (7.7%) acceptable. Five-year graft survival in the treated group was 69.4% compared with 89.9% in the control group. Among the treated transplant recipients, those who were T–B– FCXM at the time of transplant had an 87% five-year graft survival compared to a 53% graft survival for recipients who were FCXM+ at the time of transplant. Thus the deterioration of five year outcome in the antibody reduction treated group was limited to those with residual antibody at the time of transplant.
The Mayo Clinic program reported outcomes on 189 patients transplanted between April 2000 and September 2007: 51 T+ AHG CDCXM recipients and 37 T– AHG CDCXM/FCXM+ recipients with channel shifts >300, treated with combinations of plasmapheresis, IVIG, rituximab and, transiently, splenectomy; and 30 T– AHG CDCXM/FCXM+ recipients with channel shifts <300 and T–B– FCXM recipients, who were untreated. Recall that a 40 channel shift for T cells and an 80 channel shift for B cells are generally considered a positive test. Although there was a significantly higher rate of graft loss in treated T+ AHG CDCXM recipients (24 out of 56) compared to the other groups (HR 7.71, P=0.0001), the differences between the treated T– AHG CDCXM/FCXM+ recipients with channel shifts >300 (2 out of 37), the untreated T– AHG CDCXM/FCXM+ recipients with channel shifts <300 (1 out of 30) and the T–B– FCXM recipients (0 out of 70) were not significant (P=0.57), once again showing better outcomes in recipients with presumably lower amounts of antibody.
In contrast to the crossmatch, which is designed to identify the presence of antibodies in a given serum directed at the antigens of a particular donor, the antibody screen is designed to survey all antibodies present in a given serum, as indicated by its reactivity with a panel of antigen-bearing targets, lymphocyte or artificial platform. Antibody screening reports the presence or absence of antibody and, if present, the percent of the panel with which the serum reacts (panel reactive antibody, or PRA). Because each target carries multiple antigens, the antibodies present cannot be directly known, but by using large, diverse panels they can be indirectly identified from the serum reactivity pattern. This technique had limited correlation with the crossmatch. As with crossmatch technology, antibody screening also has evolved, making available increased specificity and sensitivity. The modifications described for the CDCXM also were used for antibody screening. This was followed by the development of “solid phase technology”, i.e. the ability to solubilize HLA antigens and bind them to the wells of plastic trays and to beads. Further refinements allowed the synthesis of HLA antigens and the attachment of each antigen to a separate SAB. The presence of antibody adhering to SAB is measured with a fluorescing label, and fluorescence is quantified as mean fluorescence intensity (MFI).
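As a rough illustration of the screening arithmetic described above, PRA is simply the percent of panel targets with which a serum reacts, and SAB screening reduces to reading off which beads exceed an MFI cutoff. The data and the 1000-MFI positivity threshold below are hypothetical examples; each laboratory sets its own cutoff.

```python
# Hypothetical sketch of panel reactive antibody (PRA) arithmetic and a
# single antigen bead (SAB) readout. The MFI cutoff of 1000 is an assumed
# example threshold, not a standard; laboratories define their own.
def percent_pra(panel_reactions):
    """PRA = percent of panel targets with which the serum reacts."""
    positive = sum(1 for reacted in panel_reactions if reacted)
    return 100.0 * positive / len(panel_reactions)

def sab_specificities(bead_mfi, cutoff=1000):
    """Return the HLA antigens whose beads meet or exceed the MFI cutoff."""
    return {antigen for antigen, mfi in bead_mfi.items() if mfi >= cutoff}

# Hypothetical serum reacting with 12 of 40 panel targets -> 30% PRA
reactions = [True] * 12 + [False] * 28
pra = percent_pra(reactions)          # 30.0

# Hypothetical SAB result: antibodies against A2 and DR4 exceed the cutoff
beads = {"A2": 8500, "A24": 300, "B7": 150, "DR4": 2200}
specs = sab_specificities(beads)      # {"A2", "DR4"}
```

The contrast between the two functions mirrors the text: panel screening yields only a percentage, while SAB screening names the individual specificities.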
With the use of solid phase antibody screening in general, and SAB in particular, the concordance with crossmatch results has improved, allowing the antibody screen to accurately predict crossmatch results and leading to the coining of the term VXM. The correlation was good enough that in 2006 UNOS deemed that the presence of antibodies in a patient’s serum against antigens of a potential donor makes that patient ineligible to be crossmatched with that donor. Nevertheless, because a patient with a high titer of anti-HLA antibody but no antibody directed at the antigens of a prospective donor has a high chance of a negative crossmatch, more high PRA patients are being crossmatched and transplanted.
The follow-up period in the VXM data is relatively short and associated with conflicting reports. Levine et al. found no detrimental effect on outcome associated with the presence of DSA when the MFI was <2000, and Morris et al. and Lazarova et al. found no outcome difference in crossmatch negative recipients with and without DSA. Conversely, Lefaucheur et al. reported decreased graft survival in recipients with a negative crossmatch and the presence of DSA, as well as a correlation between the level of MFI, antibody-mediated rejection and graft loss. Caro-Oleas et al. reported decreased graft survival in recipients with a negative crossmatch and the presence of DSA, but no correlation between graft loss and MFI, with no mention of antibody-mediated rejection. Amico et al. found that DSA+ recipients exhibiting antibody-mediated rejection showed reduced graft survival while those not exhibiting antibody-mediated rejection did not. Clearly, SAB technology has increased our ability to identify antibody specificity. In order for this information to translate into an accurate VXM, the complete donor HLA phenotype must be known, including HLA A, B, Cw, DR, DQ, and DP.
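In outline, the VXM is a set comparison: the recipient's SAB-defined antibody specificities are checked against the donor's complete HLA typing, and any overlap constitutes DSA, predicting a positive physical crossmatch. The sketch below uses hypothetical antigen names to illustrate this logic.

```python
# Simplified sketch of a virtual crossmatch (VXM): intersect the recipient's
# SAB-defined antibody specificities with the donor's HLA phenotype. Antigen
# names are hypothetical examples; as the text notes, a real VXM requires the
# donor's complete typing at HLA-A, B, Cw, DR, DQ and DP.
def virtual_crossmatch(recipient_specificities, donor_hla):
    """Return (vxm_positive, donor_specific_antibodies)."""
    dsa = set(recipient_specificities) & set(donor_hla)
    return (len(dsa) > 0, dsa)

recipient = {"A2", "B44", "DR4"}                   # antibodies found by SAB
donor = {"A1", "A2", "B8", "B35", "DR15", "DR17"}  # donor HLA phenotype
positive, dsa = virtual_crossmatch(recipient, donor)
# positive is True; dsa == {"A2"}, i.e. anti-A2 is a donor specific antibody
```

A missing donor locus in this comparison silently hides potential DSA, which is why the text stresses that complete donor typing is a prerequisite for an accurate VXM.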
As noted throughout, following successful transplantation some recipients tolerate their allografts, while others suffer rejection episodes that require augmented therapy. The ultimate evidence for rejection is kidney biopsy findings and deterioration of graft function. Scientists have sought a test less invasive than biopsy and capable of earlier recognition than deterioration of function, with the expectation that early recognition of rejection would allow more effective treatment [42-44]. Although a detailed discussion of these techniques is beyond the scope of this review, it should be noted that a correlation exists between the presence of DSA and rejection (particularly antibody-mediated rejection) and graft loss. This is true for recipients developing new DSA post-transplant and for those with pre-transplant DSA whose DSA persists post-transplant [46,47]. Interestingly, recipients with DSA at the time of transplant whose DSA disappear post-transplant do well. Pre-transplant demographics have not been useful in distinguishing those whose DSA will disappear from those whose DSA will persist. Under most circumstances donor cells are not available, obviating the use of the classical crossmatch, but solid phase antibody screening has been an effective method of demonstrating DSA.
Since the hallmark study of Patel and Terasaki, the outcomes of patients transplanted with positive crossmatch results have greatly improved. A possible explanation may be the selection of recipients with low anti-HLA titers. Although the VXM adds information on the presence of DSA, we have only sparse data on the outcome implications of such results when the actual crossmatch is negative or “borderline” positive. In some centers, both of these circumstances result in the elimination of such a potential recipient from consideration for transplant without further testing. Centers employing antibody reduction protocols report good early outcomes, although one program reports decreased five-year graft survival in recipients transplanted with residual DSA after completion of antibody reduction therapy. Post-transplant demonstration of the persistence or appearance of DSA is of value in directing monitoring. Current technology must be modified, or new technology developed, to differentiate transplants with acceptable immunologic risk from those with unacceptable risk. Further work to prospectively determine under what circumstances crossmatch positive transplants can proceed safely is warranted.
Portions of the data used in the reported analyses have been supplied by the United Network for Organ Sharing as the contractor for the Organ Procurement and Transplantation Network (OPTN). The analysis, interpretation and reporting of these data are the responsibility of the authors and should in no way be seen as representing official policy of or interpretation by the OPTN or the U.S. Government.