ISSN: 2229-8711
Global Journal of Technology and Optimization

HEAT INTEGRATION AND RENEWABLES – RECENT DEVELOPMENTS AND ACHIEVEMENTS

Jiří Jaromír Klemeš*, Petar Sabev Varbanov

Faculty of Information Technology, University of Pannonia, Veszprém, Hungary

*Corresponding Author:
Jiří Jaromír Klemeš
Faculty of Information Technology, University of Pannonia, Veszprém, Hungary
E-mail: [email protected]

Received date: May 2011; Revised date: June 2011; Accepted date: August 2011


Abstract

Process Integration (PI) is a powerful tool for designing and optimising processes for energy efficiency and sustainability. It has been widely extended and has become both a part of most good degree curricula and a routine tool for advanced design and optimisation in various industries. However, its apparent simplicity is still sometimes misunderstood. PI, and specifically Heat Integration (HI) in this contribution, has some potential pitfalls related to the problem formulation and data extraction. Regardless of the precision used, the results largely depend on solving the correct problem – i.e. on whether the formulation reflects reality adequately and whether the appropriate data have been extracted. Incorrect data extraction has been the reason for conclusions that PI did not work. When most of those problems are revisited, it becomes obvious that the fault lay not with the PI methodology but with an inexperienced user.

Keywords

Process Integration, Renewable Integration

History of PI Development

Energy and water saving, global warming and greenhouse gas emissions have become major technological, societal, and political issues. Being closely related to energy supply, they are of strategic importance. Numerous studies are performed to improve the efficiency of energy supply and utilisation while reducing emissions of greenhouse gases, volatile organic compounds and other pollutants. As a response to these industrial and societal requirements, several novel methodologies emerged around 1977. Two of them have been "Process Systems Engineering" by Sargent [1,2] and "Process Integration" by Linnhoff et al. [3], followed by a number of works from the UMIST group. Both disciplines have become involved in dedicated conferences such as ESCAPE – the European Symposium on Computer Aided Process Engineering, facilitated by the European Federation of Chemical Engineering Working Party on Computer Aided Process Engineering [4] – and PRES – the Conference on Process Integration, Modelling and Optimisation for Energy Saving and Pollution Reduction [5], which is annually supported by chemical and chemical engineering societies (Hungarian Chemical Society, Czech Society of Chemical Engineering, Italian Association of Chemical Engineering, The Canadian Society for Chemical Engineering). Gradually it has become apparent that the resource inputs and effluents of industrial processes are often connected to each other. Examples of this include: (i) Reducing the external heating utility is usually accompanied by an equivalent reduction in the cooling utility demand [6,3]. This obviously also tends to reduce the CO2 emissions from the corresponding sites. (ii) Reduction of waste water effluents in most cases also leads to reduced fresh water intake [7-9].

PI is a family of methodologies for combining several processes to reduce the consumption of resources or harmful emissions to the environment. It started mainly as heat integration, stimulated by the energy crisis of the 1970s [3,6,10-13]. This energy saving methodology has been extensively used in the processing and power generating industries over the last 30 years. The method examines the potential for improving and optimising the heat exchange between heat sources and heat sinks via heat exchangers in order to reduce the external heating and cooling requirements, thereby reducing costs and emissions. A systematic design procedure has been developed to provide the final energy saving design of the system.
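To make the targeting step of this procedure concrete, the following is a minimal sketch of the Problem Table (heat cascade) algorithm in Python. The stream set and the ΔTmin value are illustrative assumptions introduced here, not data from any of the cited studies; the routine returns the minimum hot and cold utility targets and the shifted Pinch temperature.

```python
# Minimal Problem Table (heat cascade) sketch for Pinch targeting.
# Stream data and dt_min are illustrative assumptions, not from the article.

# Each stream: (type, supply T [degC], target T [degC], CP [kW/K])
streams = [
    ("hot",  250.0,  40.0, 15.0),
    ("hot",  200.0,  80.0, 25.0),
    ("cold",  20.0, 180.0, 20.0),
    ("cold", 140.0, 230.0, 30.0),
]

def problem_table(streams, dt_min):
    """Return (min hot utility, min cold utility, shifted Pinch temperature)."""
    # Shift hot streams down and cold streams up by dt_min/2
    shifted = [(k, ts - dt_min / 2, tt - dt_min / 2, cp) if k == "hot"
               else (k, ts + dt_min / 2, tt + dt_min / 2, cp)
               for k, ts, tt, cp in streams]
    # Shifted temperature boundaries, hottest first
    temps = sorted({t for _, ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)
    # Net heat surplus (+) or deficit (-) of each shifted temperature interval
    deltas = []
    for t_hi, t_lo in zip(temps, temps[1:]):
        net_cp = sum((cp if k == "hot" else -cp)
                     for k, ts, tt, cp in shifted
                     if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi)
        deltas.append(net_cp * (t_hi - t_lo))
    # Cascade the surpluses downwards; the largest deficit sets the hot utility
    cascade = [0.0]
    for d in deltas:
        cascade.append(cascade[-1] + d)
    q_hot = -min(cascade)                            # minimum hot utility, kW
    feasible = [q_hot + c for c in cascade]
    q_cold = feasible[-1]                            # minimum cold utility, kW
    pinch_t = temps[feasible.index(min(feasible))]   # shifted Pinch temperature
    return q_hot, q_cold, pinch_t

print(problem_table(streams, dt_min=10.0))           # -> (750.0, 1000.0, 145.0)
```

For this assumed stream set the shifted Pinch of 145 °C corresponds to hot and cold Pinch temperatures of 150 °C and 140 °C at ΔTmin = 10 K.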

Heat Integration (HI) using Pinch Technology has several definitions, almost invariably referring to the thermal combination of steady-state process streams or batch operations for achieving heat recovery via heat exchange. More broadly, the definition of HI adopted by the International Energy Agency [14] is: Systematic and General Methods for Designing Integrated Production Systems, ranging from Individual Processes to Total Sites, with special emphasis on the Efficient Use of Energy and reducing Environmental Effects.

The HI methodology was the first-developed part of Process Integration and provides the design foundation for CHP (combined heat and power) systems, refrigeration, air conditioning and heat pump systems. It is equally applicable to small, medium and large industrial sites such as oil refineries with petrochemicals production and power stations. The technology answers one of the major challenges in the design of heating and cooling systems, namely the complexity of energy and power integration, by providing a mapping strategy based on thermodynamically derived upper bounds on the system thermal and power performance. The efficient use of the available heating and cooling resources for serving complex systems of various sizes and purposes can significantly reduce energy consumption and emissions. The methodology can also be used to integrate renewable energy sources such as biomass, solar PV and solar thermal into combined heating and cooling cycles. Since 1995 the energy consumption of the EC member countries has risen by 11 %, to 1,637 Mt of oil equivalent [15]. This increase contrasts with the population of the EC member states, which is growing at a much slower rate – approximately 0.4 %/y [15]. The overall share of total energy consumption taken by industry is declining in most countries. For example, in the UK domestic energy consumption rose from 35.6 Mt of oil equivalent to 48.5 Mt in the period from 1971 to 2001, an increase of 36 %, despite energy efficiency improvements [16].

Process Integration technology (or Heat and Water Integration/Pinch Technology) has been extensively used in the processing and power generating industries over the last 35 years and was pioneered by the Department of Process Integration, UMIST (now the Centre for Process Integration, CEAS, The University of Manchester) in the late 1980s and 1990s. The methodology of HI is described in more detail elsewhere [17]. Besides HI there have been some generic developments of Process Integration such as Water/Mass, Hydrogen and Oxygen Integration, as well as combined Energy/Water [18] and Oxygen/Water Integration [19].

It is remarkable that PI has not lost the interest of researchers in 35 years and has even been flourishing recently. The heat integration methodology has proved to have a considerable potential for groups of processes in chemical processing sites, reducing the overall energy demand and emissions across the site and leading to a more effective and efficient site utility system. The method is also able to address the production of cogeneration shaft power. Further details are available elsewhere [3,20-23].

One of the first related works was that by Hohmann [10] in his PhD thesis at the University of Southern California. This work was the first to introduce systematic thermodynamics-based reasoning for evaluating the minimum energy requirements of a given HEN synthesis problem. The work was continued by Linnhoff and Flower in the late 1970s. They built on Hohmann's foundation and in 1977 developed the basis of Pinch Technology, which is now considered the cornerstone of Heat Integration. As is usual for a pioneering innovation, it was difficult to publish; thanks to the authors' strong commitment, the first publications appeared in 1978 [6]. This has become the most cited paper in the history of chemical engineering. Similar work was done more or less in parallel in Japan [25,26]. However, it was Linnhoff, supported by strong teams from UMIST and later Linnhoff March Ltd, who pushed the new concept through academia and industry. The publication of the first "red" book by Linnhoff et al. [3] played a key role in the dissemination of heat integration. More recently the book received a new Foreword [3] and a content update. This user guide to pinch analysis provided an insight into the most common process network design problems, including heat exchanger network synthesis, heat recovery targeting, and the selection of multiple utilities.

These methodologies were developed and pioneered by the Department of Process Integration, UMIST (now the Centre for Process Integration, CEAS, The University of Manchester) in the late 1980s and 1990s [3,13,21,27,28]. A second edition of the Process Integration user guide [3] was recently published by Kemp [23]. The most recent book was presented by Klemeš et al. [17]. A specific overview of Heat Integration in the food industry was presented by Klemeš and Perry [29] and Klemeš et al. [24]. Foo et al. [30] successfully applied the Pinch Analysis approach to carbon-constrained energy sector planning. Total Sites remain an active area of research with practical applications – e.g. the recent work by Hackl et al. [31].

Another very important part of process design and optimisation is the synthesis phase of the process flowsheets. From the very early stages there have been attempts to combine PI and optimisation, e.g. [32]. This is usually performed directly or after the targeting phase outlined earlier. Ideally, the structure of the entire process and the configurations of the operating units within it should be synthesised and designed optimally and simultaneously, because their performances influence each other. The main source of complexity is the dual nature of the problem – continuous as well as discrete. Several approaches to performing this task are known, including heuristic, evolutionary and superstructure-based ones. Two major classes of methods for process synthesis are heuristic methods and algorithmic (mathematical programming) methods. Inevitably, hybrid methods have been proposed which resort to both heuristic rules and mathematical programming. As with any other human activity involving decision making, process synthesis, or design, is an activity in which no past experience can be ignored, especially when it comes to the localised details of the design. The most popular approach is to create a superstructure for the network being designed and then choose the best possible solution network from the superstructure options.

Successful Applications

There have been numerous successful applications, and we attempt to provide at least a short overview of selected implementations of the HI methodology for various industrial case studies. The presentation is somewhat condensed because of space limitations; more information is available in the works cited.

The main fields of implementation have traditionally been the chemical, oil refining and petrochemical industries. The heat exchanger network (HEN) of a fluid catalytic cracking (FCC) unit process, consisting of a main column and a gas concentration section, was retrofitted in [33]. A crude-oil preheating system retrofit problem was studied in [34]. Tovazhnyansky et al. [35] presented the process integration of sodium hypophosphite production. Matsuda et al. [36] made an interesting study of energy saving in the reaction section of the hydro-desulfurization process with self-heat recuperation technology.

Many studies have employed pinch technology (and its associated heat integration analysis) in the food-processing industry. This industry has a far different thermodynamic profile from that of the refining and petrochemical industries. The food-processing industry is characterised by process streams of relatively low temperature (normally 120–140 °C), a small number of hot streams, low boiling-point elevation of food solutions, considerable deposition of scale in evaporators and heat recovery systems, and seasonal operation. Case studies related to energy efficiency in the sugar industry were presented by e.g. Klemeš et al. [37] and Grabowski et al. [38,39]. A heat integration analysis of a brewery with considerable energy savings was presented by Hufendiek and Klemeš [40]. Klemeš, Kimenov and Nenov [41] presented a comprehensive study covering a sugar plant, a raw sunflower oil plant and a corn crystal glucose plant. A case study of a whisky distillery by Smith and Linnhoff [42] (see also [43]) provides another example of how HI can reduce energy use and increase energy efficiency. Another study involving a whisky distillery was made by Kemp [23]. Fritzson and Berntsson [44] studied a Swedish slaughtering and meat-processing plant. A case study analysing a sugar plant in a developing country by the heat integration methodology has been published by Raghu Ram and Banerjee [45]. Several case studies have been documented by DETR [46,47]. They identified a waste heat recovery potential for the UK alone of 8.3 PJ/y, which at the prices of that time represented around 14 M£ – more than 20 M£ at today's cost of energy. They concluded that in the dairy industry the pasteurisation process is already highly efficient in terms of heat recovery (up to 95 %), but sterilisation is more energy intensive, with bottle sterilisation consuming 300-500 MJ/t. They mentioned several energy saving measures implemented at the Associated Dairies plant. The other food and drink processes reviewed by DETR [46,47] were bakeries, breweries, drying in the production of flavourings and ingredients, and a worked example of a breadcrumb dryer plant where an energy saving potential of about 30 % was identified.

A number of case studies have been completed by Linnhoff March – KBC Advanced Technologies [48]. The details are mostly confidential, but publicity information can be obtained from the company.

The work published by Soh, Wan Alwi, and Manan [49] is an example of wide implementation in the pulp and paper industry. An interesting HI-related application is the analysis of a mechanical pulp and paper mill using advanced composite curves by Ruohonen et al. [50].

Total Sites have provided considerable scope beyond the HI of individual processes. Varbanov et al. [51] demonstrated the synthesis of the utility system (CHP network) of an industrial Total Site by applying a combination of targeting and mathematical programming techniques. Herrera, Islas, and Arriola studied a hospital complex that included an institute, a general hospital, a regional laundry centre, a sports centre, and some other public buildings. Bandyopadhyay et al. [53] presented options for targeting the cogeneration potential through Total Site integration. Recent overviews have been presented by Friedler [54,55].

How to Proceed with HI to Achieve a Credible Solution

Traditional Pinch Analysis assesses the minimum practical energy needs of a process through a systematic design procedure involving five steps:

(i) Collection of plant data;

(ii) Setting targets for minimum practical energy requirements;

(iii) Examination of process changes that contribute to meeting the target;

(iv) Obtaining the minimum energy design that achieves the target; and

(v) Optimisation which allows a trade-off between energy costs and capital costs.

The first issue is: how to start and progress with a PI study? Kemp [23] summarised the steps, which have been further developed here based on the authors' experience. They relate to HI, but can also be used for mass/water integration:

1) Get familiar with the process. The efficient way is to closely liaise with the process designer (grassroots design) and/or plant manager (operating plants).

2) Mass and heat balance – based on the process flowsheet data, as well as calculations and/or measurements from the running plant (for a retrofit).

3) Select the streams. This is a key step and not as straightforward as it seems.

4) Remove all existing units related to the PI analysis. For HI remove all heat exchangers; for mass/water integration – all mass/water exchanging units. This is crucial – otherwise the optimised design would be the same as the initial one.

5) Extract the stream data for the PI analysis. For HI thermal data are needed; for Water Integration – contaminant loads and water flowrates.

6) Select, by a qualified guess or from experience, an initial value of ΔTmin for the heat integration. This can later be optimised at the various stages of the design.

7) Perform the Pinch Analysis, obtaining the Pinch location and the utility targets.

8) Design the initial HEN, starting with the maximum energy recovery.

9) Check for Cross-Pinch heat transfer and inappropriately placed utilities.

10) Check the placement of reactors, separation columns, heat engines and heat pumps.

11) Investigate the potential for process modifications for both energy minimisation and capital cost reduction. Investigate the potential benefits of the plus/minus principle and of the Keep Hot Streams Hot and Keep Cold Streams Cold principle.

12) Investigate the integration with other processes – Total Site Analysis.

13) Evaluate pressure drop effects (the trade-off between heat saving and pumping cost) and the layout implications – piping cost, heat losses and pressure drop losses.

14) Make a pre-selection of the heat exchange equipment and a preliminary costing. Provisions based on an assessment of future energy prices are needed.

15) Perform an optimisation run of the pre-designed plant/site, adjusting ΔTmin.

16) Based on the optimisation, adjust and extract more precise data and return to step 7); perform additional loops with screening and scoping and potential simplification.

17) Consider real plant constraints, including safety, technology limitations, controllability, operability and flexibility, availability and maintainability.

18) Start-up and shut-down issues are very important for PI design. Some early highly integrated plant designs suffered from such problems.

19) Perform a second optimisation run for the final tuning. If needed, return to any appropriate previous step for adjustment.

20) The design is ready for detailing. However, optimisation is a never-ending procedure – with changing operating conditions and/or economic environment the design should be re-optimised.

Data Extraction

Data extraction is a crucial step. It can be performed automatically [56] from simulation data, and several software packages have offered this option, e.g. SuperTarget®. However, it has to be done carefully. Poor data extraction can easily lead to missed opportunities for improved process design: if the data extraction accepts all the features of the existing flowsheet, there will be no scope for improvement. Since 2000 the methodology has been further developed and attempts at automatic data extraction have been made, but the rules and accumulated experience are still valuable. The basic questions are: (i) When is a stream a stream? (ii) How precise do the data need to be at specific steps? (iii) How should considerable Cp changes be handled? (iv) What is the further know-how (rules) for data extraction? (v) How are the heat loads, capacities and temperatures of an extracted stream calculated? (vi) How "soft" are the data in the flowsheet/plant? (vii) Where can data for the capital and running costs be found?

When is a stream a stream?

This is one of the key issues for a proper problem setup. Streams neither gaining nor providing heat should not be considered; this rule considerably simplifies the problem. There are also some streams which should not be included in the PI problem – e.g. for distance, safety, product purity, or operational reasons. When deciding which streams are going to be extracted, the following question should be answered: when is a stream a stream? Let us consider the example in Figure 1. It was introduced by Linnhoff et al. [3] and has been used with some modifications in follow-up books [21,23] and in many courses based on UMIST/The University of Manchester teaching materials. It shows a part of a flowsheet in which the feed stream is heated by a recuperating heat exchanger to 45 °C and enters a processing unit. After leaving this unit, the stream is heated again by two heat exchangers and enters a reactor. The reactor operation requires the feed stream to be at 160 °C. The options for how many streams should be extracted are: (i) one, from 10 °C to 160 °C; (ii) two, from 10 °C to 45 °C and from 45 °C to 160 °C; or (iii) three, from 10 °C to 45 °C, from 45 °C to 80 °C and from 80 °C to 160 °C?

Figure 1: An example – a flowsheet fragment

If Option (iii) is applied, the resulting design would be exactly the initial one, having again three heat exchangers with identical heat duties. This is the case for which critics of PI concluded that no improvement was obtained. Option (ii) offers more degrees of freedom – the first heat exchanger would be the same as in the current flowsheet, but the rest of the design could be modified. Extracting two streams would be the case when the processing unit demands a feed temperature close to 80 °C. Option (i) would provide the most degrees of freedom and scope for improvement, but it requires that the processing unit feed can be at any temperature between the supply of 10 °C and the reactor target of 160 °C. If the processing unit is a filter, as Smith [21] assumed, there would be some restriction on the filter supply temperature – at high temperatures the filter might experience a problem. If the processing unit is just storage, as Linnhoff et al. [3] assumed, the temperature restriction might be different. This simple example demonstrates that stream extraction cannot be fully automatic, but requires additional assessment related to the processing units and their performance.
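To show how the extraction choice changes the problem handed to a targeting tool, the short sketch below encodes the three options of Figure 1 as cold-stream records; the CP value is an assumed constant introduced only for illustration.

```python
# How the Figure 1 extraction choices translate into cold-stream records.
# CP is an assumed illustrative value; the temperatures come from Figure 1.
CP = 2.0  # kW/K, assumed constant for illustration

option_i   = [("cold", 10.0, 160.0, CP)]                       # one free stream
option_ii  = [("cold", 10.0,  45.0, CP), ("cold", 45.0, 160.0, CP)]
option_iii = [("cold", 10.0,  45.0, CP), ("cold", 45.0,  80.0, CP),
              ("cold", 80.0, 160.0, CP)]

for name, recs in [("(i)", option_i), ("(ii)", option_ii), ("(iii)", option_iii)]:
    duty = sum(cp * (tt - ts) for _, ts, tt, cp in recs)
    print(name, f"total duty = {duty:.0f} kW over {len(recs)} record(s)")
# The total duty is identical in all three cases (300 kW here); what differs is
# how many intermediate temperatures are fixed, i.e. how much freedom the
# heat exchanger network design retains.
```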

How precise do the data need to be at specific steps?

This is a very frequent question. A common excuse for not performing a PI analysis is that a running plant does not have sufficiently precise data. PI starts with rough assumptions, which are further corrected in several loops. PI and the initial optimisation are more about screening and scoping than about detailed design. The goal is to answer the question: what potential for energy saving is there, and in which direction should the optimisation proceed? If that potential is about 15 %, this is sufficient, and it does not matter too much whether the precise figure is 13 % or 17 %. In the regions close to the Pinch the data should be as precise as possible [3]. At the start the designer might have only a vague idea of where and at what temperature the Pinch will occur. The data extraction has to start from rough assessments and be corrected iteratively.

How to handle considerable Cp changes and the latent heat?

From Figure 1 it is obvious that phase changes are very likely to occur when the temperature rises from 10 °C to 160 °C. Cp also changes with the temperature, so simply using a constant Cp would be unrealistic. To deal with this problem, a segmentation technique was developed. It has been used e.g. in STAR [57]. It is important to decide how many segments to define and at which temperatures they should start and end. Each segment increases the complexity, so their number should be kept to a minimum.
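A minimal sketch of such a segmentation is given below; the cp(T) correlation, the mass flowrate and the segment count are illustrative assumptions, not data from the article.

```python
# Minimal sketch: splitting a stream with temperature-dependent cp into
# piecewise-constant-CP records for targeting. The cp(T) correlation and the
# mass flowrate are assumed for illustration only.

def cp_kj_per_kg_k(t_c):
    """Assumed linear cp correlation, kJ/(kg*K)."""
    return 2.1 + 0.004 * t_c

def segment_stream(kind, t_supply, t_target, m_kg_s, cp_of_t, n_segments=3):
    """Split [t_supply, t_target] into n_segments, each with an average CP."""
    records = []
    step = (t_target - t_supply) / n_segments
    for i in range(n_segments):
        t0 = t_supply + i * step
        t1 = t0 + step
        cp_avg = 0.5 * (cp_of_t(t0) + cp_of_t(t1))       # trapezoidal average
        records.append((kind, t0, t1, m_kg_s * cp_avg))  # CP in kW/K
    return records

# Example: the Figure 1 cold stream, 10 -> 160 degC, assumed flow 1.5 kg/s
for rec in segment_stream("cold", 10.0, 160.0, 1.5, cp_kj_per_kg_k):
    print(rec)
# More segments track the cp curve better but enlarge the targeting problem,
# so the count should be the minimum that reproduces the enthalpy profile.
```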

What are the data extraction rules?

Some data extraction rules were introduced very early by Linnhoff et al. [3] and have been used with some modifications in follow-up books [17,21,23] and many courses based on UMIST teaching materials, e.g. [58]. Most have been heat integration related; however, the principles can be applied analogously to mass/water integration as well. The rules are as follows:

(i) Non-isothermal mixing. When two or more streams with different temperatures are mixed, this represents a heat exchange with degradation of the higher temperature (a small worked sketch of the mixing energy balance follows this list). It can also result in cross-Pinch heat transfer.

(ii) Heat losses. In most cases the heat losses are neglected. This is not correct for situations where streams are long or subject to very different temperatures. The solution is to introduce hypothetical coolers / heaters representing the losses.

(iii) Extracting utilities. Utilities should never be extracted from the existing plant or flowsheet. Such an action would likely result in the same utility use and neglect more efficient options – e.g. utility generation. However, attention should be paid to the fact that e.g. steam is not always a utility; in some cases it is also used as a process stream – an example is stripping steam in separation columns.

(iv) Generation of utilities. The HI analysis using the Grand Composite Curve may indicate valuable options for using otherwise wasted heat or cold to generate utilities. Many mistakes have been caused by just matching the evaporation and condensation lines without making provisions for the sensible heat segments (preheating and superheating, steam vapour cooling, condensate sub-cooling).

(v) Extracting at the effective temperature. In some cases a stream cannot be extracted directly as it still has to be used by a related process. E.g. a hot stream should be extracted at temperatures at which the heat becomes available. A good example has been presented by Smith [23] for a reactor using a quench liquid.

(vi) Forced and prohibited matches. There could be matches in a heat exchanger network which should be either prohibited, e.g. because of the danger of contamination, or which must be enforced. Software tools usually offer such an option; if not, this can be secured by an appropriate penalty/bonus in the objective function.

(vii) Keeping streams separate only when necessary. If streams can be merged then it may be possible to eliminate some heat exchanging units.
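As flagged under rule (i), the following is a minimal sketch of the energy balance behind non-isothermal mixing; the flowrates, cp values and temperatures are illustrative assumptions.

```python
# Minimal sketch of rule (i): the energy balance of non-isothermal mixing.
# Flowrates, cp values and temperatures are illustrative assumptions.

def mix_temperature(branches):
    """branches: iterable of (m_kg_s, cp_kj_per_kg_k, t_c) tuples."""
    heat = sum(m * cp * t for m, cp, t in branches)
    capacity = sum(m * cp for m, cp, t in branches)
    return heat / capacity

# A 90 degC branch mixed with a 30 degC branch
t_mix = mix_temperature([(2.0, 4.2, 90.0), (1.0, 4.2, 30.0)])
print(f"mixing temperature = {t_mix:.1f} degC")   # 70.0 degC
# Extracting only the mixed stream at 70 degC hides the fact that heat at
# 90 degC was degraded to 70 degC; extracting the branches separately keeps
# that heat available above the Pinch if the analysis needs it.
```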

How can the heat loads, heat capacities, and temperatures of an extracted stream be calculated?

When a stream has been extracted, the next problem is how to calculate the heat-related data. Common engineering practices are available for running plants, such as measurements followed by data reconciliation [59,60]. Another option is to use a flowsheeting simulation model. These options are time consuming, and at the early design stage the process structure is likely to evolve. For this reason it is possible to use a simplified approach based on the extracted data. Experience shows that at the initial stage such estimates are sufficient.
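A minimal sketch of such simplified early-stage estimates is shown below; the numerical values are illustrative assumptions, not plant data.

```python
# Minimal sketch of simplified early-stage estimates; the numbers are
# illustrative assumptions, not plant data.

def heat_load_kw(m_kg_s, cp_kj_per_kg_k, t_supply_c, t_target_c):
    """Q = m * cp * (Tt - Ts); positive for a cold stream, negative for a hot one."""
    return m_kg_s * cp_kj_per_kg_k * (t_target_c - t_supply_c)

def cp_rate_kw_per_k(q_kw, t_supply_c, t_target_c):
    """Back out CP when a duty is known, e.g. from an existing exchanger."""
    return q_kw / (t_target_c - t_supply_c)

print(heat_load_kw(1.5, 2.3, 10.0, 160.0))   # 517.5 kW
print(cp_rate_kw_per_k(517.5, 10.0, 160.0))  # 3.45 kW/K
```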

How “soft” are the data in a flowsheet/plant?

Inexperienced practitioners try to stick to the temperatures shown in the PFD and then perform the PI analysis. This approach usually ends up overlooking many opportunities. It is better to question every temperature, discuss them with the plant designer/manager, and establish which temperatures absolutely must be achieved (the “hard” data), while the rest (the “soft” data) can be compromised. In practice most data are in some way soft, and this can be used beneficially. Streams leaving the plant are usually characterised by soft data and are suitable for optimisation via the +/- principle. Data softness is related to changing conditions and to flexibility, operability, and resilience.

Where to find data for the capital and running costs?

The need to find cost data arises when the appropriate ΔTmin has to be selected; the optimum ΔTmin depends on the economic parameters. Estimating capital cost is time consuming. There are approximate methods [61] for the initial stage, when little is known about the design and materials required or about the temperature, pressure and composition of the streams. Equipment cost may vary regionally and may be related to market conditions. It is also difficult to estimate the operating cost, which is affected by labour and taxation and is mainly a function of the energy cost. A potential pitfall is using the current price of energy. It is better to use the anticipated average energy price over the life span of the plant or, in the case of a retrofit, over the expected payback period. In a number of works this rule was not followed. The question is then where to find energy price projections for five or ten years ahead; even the forecasts of qualified institutions have not been fulfilled. One potential approach is to use scenarios and target the most flexible design, which would provide a balanced optimum for the various situations.
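The trade-off behind the ΔTmin selection can be illustrated with a minimal scan ("supertargeting") sketch. It reuses the problem_table() function and the stream list from the targeting sketch earlier in this article; the utility prices and the deliberately crude capital proxy are illustrative assumptions, whereas a real study would use area targets and proper equipment cost laws.

```python
# Minimal sketch of a DTmin scan ("supertargeting"). Reuses problem_table()
# and the stream list from the targeting sketch above. Prices and the capital
# proxy are illustrative assumptions.

HOT_UTIL_COST = 120.0    # EUR per kW and year, assumed life-span average price
COLD_UTIL_COST = 15.0    # EUR per kW and year, assumed
CAPITAL_COEFF = 5.0e5    # EUR*K, assumed proxy: annualised capital ~ coeff / DTmin

def total_annual_cost(dt_min, streams):
    q_hot, q_cold, _ = problem_table(streams, dt_min)
    energy_cost = q_hot * HOT_UTIL_COST + q_cold * COLD_UTIL_COST
    capital_annuity = CAPITAL_COEFF / dt_min      # smaller DTmin -> larger area
    return energy_cost + capital_annuity

best_dt = min(range(2, 41), key=lambda dt: total_annual_cost(float(dt), streams))
print(f"indicative optimum DTmin = {best_dt} K")  # about 10 K for these assumptions
```

Energy cost falls and the capital proxy rises as ΔTmin is reduced; the scan simply picks the ΔTmin with the lowest total, which is the same logic applied in step 15 of the procedure above, only with far better cost models.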

Integration of renewables – fluctuating demand and supply

Renewable availability varies significantly with time and location. The energy demands of sites also vary significantly with the time of day and the period of the year. Advanced PI methodology using time as another problem dimension is a potential solution to this problem. A basic methodology was developed previously for the HI of batch processes – Time Slice and Time Average Composite Curves [62]. It has recently been revisited by Foo et al. [63]. A novel approach has been the extension of HI to renewables by Perry et al. [64] in 2008 and by Varbanov and Klemeš in 2010 [65]. Dealing with variation and fluctuation brings another complexity into data extraction; the specification of the time intervals – the Time Slices – is important [66].
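The sketch below illustrates the Time Slice idea for a varying solar heat source; it reuses the problem_table() function from the targeting sketch above, and the slice boundaries, solar availability and process demand are illustrative assumptions.

```python
# Minimal sketch of Time Slice targeting for a varying solar heat source.
# Slice data are illustrative assumptions; problem_table() is reused from above.

# Each Time Slice: (name, duration in h, streams active in that slice)
process_cold = ("cold", 20.0, 120.0, 10.0)          # kW/K, assumed constant demand
time_slices = [
    ("night",   12.0, [process_cold]),
    ("morning",  4.0, [process_cold, ("hot",  90.0, 60.0,  5.0)]),  # weak solar
    ("midday",   8.0, [process_cold, ("hot", 140.0, 70.0, 12.0)]),  # strong solar
]

daily_hot_utility_kwh = 0.0
for name, hours, slice_streams in time_slices:
    q_hot, q_cold, _ = problem_table(slice_streams, dt_min=10.0)
    daily_hot_utility_kwh += q_hot * hours
    print(f"{name:8s} Q_hot = {q_hot:7.1f} kW   Q_cold = {q_cold:6.1f} kW")
print(f"daily hot utility = {daily_hot_utility_kwh:.0f} kWh")
# Averaging the solar stream over 24 h instead (Time Average Composites) would
# suggest more heat recovery than is actually achievable within each slice.
```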

Results Interpretation

Besides data extraction, correct interpretation of the results is a very important step in PI analysis and optimisation. The results are usually presented as a printout and in most cases as a Grid Diagram or a PFD supported by tables. Many software tools have developed an interface for transferring the extracted data in order to minimise misinterpretation.

A difficult part is the assessment of the results and their possible further development or correction from the viewpoint of the process technology. It depends on the issues of data uncertainty, data “softness”, flexibility, operability, controllability, safety, availability and maintenance. It is advisable not to stick with one solution, but to explore different scenarios related to various operating conditions and to test the sensitivity of the design.

Conclusions

Even when a sustainable and near-optimum design has been developed, it still has to be put into practice. This involves selling the projects, which in many cases could appear unconventional to the investors and contractors. This used to be a problem at the beginning of PI history. PI, and HI especially, have since proven themselves as very powerful and efficient tools and have gained in popularity, and decision makers have become more receptive. Among the pioneers have been the members of the UMIST and later The University of Manchester “PI Research Consortium”. In this century the fast development has spread to other parts of the world, namely to Asia, where Malaysia is one of the forefront countries [67,68,69,70].

It is important to point out that developing credible applications and extending PI to new areas has always been the strongest driver for developing the methodology further. Examples of novel applications can be found in milk processing – Atkins et al. [71], biorefineries [72], and the phosphorous industry [73]. A recent example of extending PI to a new field is the work by Lam et al. [74] on regional renewable resources planning.

A recent work demonstrated the control and operability of heat exchangers [75]. However, close and smooth collaboration amongst PI specialists, plant designers, plant management, operators, control engineers, and the owners/contractors is still a major issue.

References
