Meetings
ICWG Workshop 1
The first Workshop of the ICWG (previously referred to as CREW 5) will be held from 17 to 20 May 2016 in Lille, France, Europe. More details on this workshop will follow later.
CREW 4
The Fourth Cloud Retrieval Evaluation Workshop (CREW 4) was held from 4-7 March 2014 in Grainau, Germany, Europe. The Workshop was hosted by DWD and co-sponsored by Eumetsat. The workshop aimed at enhancing cloud retrieval schemes and their applicability, and at better characterizing their validity. We invited experts working with cloud parameter retrieval schemes from passive imagers (e.g. METEOSAT, AVHRR and MODIS), passive microwave sensors (e.g. AMSR), and active lidar and radar observations (e.g. CloudSat, CALIPSO) to participate in the workshop and to contribute to the cloud parameter inter-comparison and validation activity connected to the workshop.
CREW-4 Program as pdf
CREW-4 Abstracts as pdf
CREW-4 List of participants as pdf
CREW-4 Handouts
CREW-4 Oral presentations
CREW-4 Poster presentations
CREW-4 Group photo
CREW-4 First and Second announcement as pdf
CREW 3
The Third Cloud Retrieval Evaluation Workshop (CREW 3) was held from 15-18 November 2011 in Madison, Wisconsin, USA. The Workshop was hosted by the University of Wisconsin-Madison and co-sponsored by Eumetsat. The workshop was attended by experts working with operational cloud parameter retrieval schemes from passive imaging sensors (SEVIRI, AVHRR, and MODIS), passive microwave sensors (AMSR), and active lidar and radar sensors (CLOUDSAT, CALIOP), who contributed to the cloud parameter inter-comparison and validation campaign connected to the workshop. More detailed information about the conference premises can be found at http://www.ssec.wisc.edu/crew/index.html.
CREW-3 Program as pdf
CREW-3 Abstracts as pdf
CREW-3 List of participants (71 participants)
CREW-3 Meeting Summary in GEWEX newsletter Feb 2012, p. 14-16
CREW-3 Meeting Summary in BAMS
CREW-3 Oral presentations (only for registered CREW participants, please email us for registration instructions)
CREW-3 Photos
CREW-3 Second announcement as pdf
CREW 2
The second Cloud Retrieval Evaluation Workshop took place in Locarno, Switzerland from 3 - 5 February 2009.
CREW 2 went a step further than the first workshop. In addition to the inter-comparison of SEVIRI algorithms, cloud parameters were also compared to data sets from the A-Train satellites: CPR on CloudSat, CALIOP on CALIPSO, as well as MODIS and AMSR-E. CloudSat carries a satellite-based radar that provides a vertical view into clouds along its path and is well suited for comparing cloud top heights. CALIOP, as a lidar sensor, is more likely to also detect thin cirrus clouds. AMSR-E is a microwave sensor and is used to compare the liquid water path over ocean from measurements in a different spectral region. MODIS has similar channel settings to SEVIRI and offers a wide range of atmospheric products. In this way we gained more knowledge on the behaviour of the different retrieval schemes for different cloud conditions.
CREW-2 Program
CREW-2 Program and Abstracts as pdf
CREW-2 List of participants (44 participants)
CREW-2 Workshop report (only for registered CREW participants, please email us for registration instructions)
CREW-2 Oral presentations (only for registered CREW participants, please email us for registration instructions)
CREW 1
The first Cloud Retrieval Evaluation Workshop took place in Norrköping, Sweden from 17-19 May 2006. CREW 1 gave a good overview of the available retrieval schemes, their applications and their differences. The idea was to present the output of the algorithms not only in individual presentations, but also to have an independent and objective inter-comparison of the data. For this purpose all participants provided the results of their algorithms for one observation day. The analysis for this first workshop was limited to the inter-comparison of SEVIRI algorithms.
CREW-1 List of participants (23 participants)
CREW-1 Workshop report