About The Project
Mission
Currently, there is no clear sense of how most efforts to introduce technology in schools are faring, individually or collectively, nor is there a rigorous understanding of the factors correlated with success or failure. This project seeks to begin addressing these concerns.
Background Information
In the developing world, where there has been increasing pressure to "catch up" to the more developed countries, the introduction of ICTs in schools has become a central element of national and sub-national education policy and practice. Ministries of education; public, state, and municipal school authorities; and private school organizations have invested substantial money, time, and energy in preparing their youth to participate successfully in a technology-savvy workplace.
Yet in spite of the vast effort being made to integrate computers and the Internet into education, there is still a poor understanding of how these technologies are being used, and even less of how ICTs have affected learning. In the more developed regions of North America, industrialized Asia, Australia, and the European Union, there has been greater emphasis on evaluating school technology programs, and a mixed (and not uniformly positive) picture of the effectiveness of approaches and implementations is beginning to emerge.
School leaders and policymakers alike are left with limited data about the use and impact of ICTs in their systems, and face corresponding difficulties in planning and measuring their progress toward effective integration of ICTs. The Global Networked Readiness in Education project seeks to aid school leaders and policymakers by helping them move beyond reliance on certain basic inputs as "predictors" (for example, student-computer ratios or hours of training) to examining what is actually happening inside the system (user experiences and their interaction with those "predictors").
Goals
The Toolkits developed as part of this project seek to address the intertwined challenges of a rapidly changing technological environment, rising hopes and expectations for technology use in education, and a deficit of data for understanding the impact of this integration. The Global Networked Readiness for Education Project has three main goals:
The project is greatly indebted to the local in-country coordinators, who were asked to select participating schools, deploy surveys, collect and sometimes enter data into a web-based survey system, and perform additional administrative and data-tracking tasks. These coordinators were volunteers from various non-profit organizations, operating with the consent of participating schools and, in some cases, the Ministry of Education. Data collection took place from September to November 2003.
COUNTRIES: Eleven developing countries participated in the study: Brazil, Costa Rica, El Salvador, the Gambia, India (state of Karnataka), Jordan, Mexico, Panama, the Philippines, South Africa, and Uganda. They were selected for a combination of characteristics, including geography (three African, one Middle Eastern, two Asian, and five Latin American nations), income, language, population, ICT-education activity at the secondary level, and the presence of on-the-ground contacts.
SCHOOLS: Country coordinators developed a list of potential participant schools according to project guidelines and determined the final participants in collaboration with US-based coordinators. Schools were required to have computers and, preferably, Internet connectivity (or at least to have had it in the recent past, since access often fluctuates due to funding, technology, and electricity issues), and to offer secondary education. The sample was selected so that schools varied in income, size, and geographic location; included government and some private institutions; and had different levels of ICT experience, programmatic approaches, and priorities. The schools surveyed had between 100 and more than 2,000 students, from less than a year to over 18 years of computer experience, and anywhere from one to more than 40 computers.
RESPONDENTS: Respondent groups included students, teachers, lab supervisors, and heads of school. In selecting participants from the student and teacher populations, coordinators were asked:
DEVELOPMENT OF SURVEY TOOL QUESTIONS: Given the diversity of the sample population, designing questions that would be understood across cultures and languages posed many challenges. The questions were written to maximize the use of common terms with precise meanings. They were originally developed in English through close collaboration between the teams at Harvard and the World Bank Institute. They were then pilot-tested in the field and reviewed by ICT-education experts from a variety of nations and professional disciplines. Based on the feedback received, we made significant changes to the survey’s composition, wording, question order, response options, and physical layout. Once the surveys were finalized, they were translated into Spanish, Portuguese, and Kannada (the native language of the state of Karnataka, India), and PDF files were made available online.
DEPLOYMENT OF SURVEYS: The country coordinators distributed a hard-copy survey to each respondent. Respondents were asked to fill in the hard copies and then enter the results using the survey’s Web interface. The HTML and PDF versions were identical in layout and were available in English, Portuguese, and Spanish, while the Kannada version for India was available only in hard copy. On-site bilingual assistance was provided while the surveys were filled out in Karnataka, India (Kannada/English), Jordan (Arabic/English), and the Philippines (Tagalog/Cebuano/English). This assistance proved especially valuable for students, who occasionally had difficulty understanding and/or contextualizing some questions. Where it was deemed inadvisable for respondents to enter their responses directly (usually due to poor connectivity or insufficient computers), the hard copies were collected and entered by the coordinator or someone he or she appointed. The paper copies were subsequently returned to the research team.
DATA COLLECTION: Each new survey entered on the web was given a unique code that allowed an interrupted survey to be resumed without data loss, and that supported security, testing, and tracking. The electronically entered data was collected into a common database for further analysis.

Partner Organizations
The following partners have been instrumental in the success of this pilot project:
In addition, we would like to thank the numerous individuals and organizations that provided valuable input, advice, and logistical support for the development of this toolkit, including the SEED (Schlumberger Excellence in Educational Development) program, iEARN, World Computer Exchange, WIDE World at the Harvard Graduate School of Education, and many others.