Sunday, December 30, 2007

History of cricket

The game of cricket has a known history spanning from the 16th century to the present day, with international matches played since 1844, although the official history of international Test cricket began in 1877. During this time, the game developed from its origins in England into a game which is now played professionally in most of the Commonwealth of Nations.
Derivation of the name of "cricket"
A number of words are thought to be possible sources for the term cricket, which could refer either to the bat or to the wicket. In Old French, the word criquet meant a kind of club, which probably also gave its name to croquet; some believe that cricket and croquet share a common origin. In Flemish, krick(e) means a stick, and in Old English cricc or cryce means a crutch or staff (though the hard "k" sound suggests the North or Northeast Midlands rather than the Southeast, where cricket seems to have begun). The Isle of Man has a game called Cammag, played with a stick (cammag) and a ball (crick) by anything between four and several hundred players; the 'crick' in this instance may be derived, though indirectly, from Flemish. Alternatively, the French criquet apparently comes from the Flemish word krickstoel, a long low stool on which one kneels in church and which may resemble the long low wicket with two stumps used in early cricket.

Early Seventeenth Century
A number of references occur up to the time of the English Civil War, and these indicate that cricket had become an adult game contested by parish teams, but there is no evidence of county-strength teams at this time. Equally, there is little evidence of the rampant gambling that characterised the game throughout the 18th century. It is generally believed, therefore, that "village cricket" had developed by the middle of the 17th century, but that county cricket had not, and that investment in the game had not yet begun.
Gambling and press coverage
Cricket certainly thrived after the Restoration in 1660 and is believed to have first attracted gamblers making large bets at this time. In 1664, the "Cavalier" Parliament passed a Gambling Act which limited stakes to £100, although that was still a fortune at the time. Cricket had certainly become a significant gambling sport by the end of the 17th century. We know of a "great match" played in Sussex in 1697 which was 11-a-side and played for high stakes of 50 guineas a side. Our knowledge of this game came about because, for the first time, cricket could be reported in the newspapers, freedom of the press having been granted the previous year. But it was a long time before the newspapers adapted sufficiently to provide frequent, let alone comprehensive, coverage of the game.

Cricket moves out of England
Cricket was introduced to North America via the English colonies in the 17th century, probably before it had even reached the north of England. In the 18th century it arrived in other parts of the globe. It was introduced to the West Indies by colonists and to India by British East India Company mariners in the first half of the century. It arrived in Australia almost as soon as colonisation began in 1788. New Zealand and South Africa followed in the early years of the 19th century.





Development of the Laws
The basic rules of cricket, such as bat and ball, the wicket, pitch dimensions, overs and ways of getting out, have existed since time immemorial. In 1727, we first hear of "Articles of Agreement" drawn up to determine the code of practice in a particular game, and this became a common feature, especially around payment of stake money and distribution of the winnings, given the importance of gambling. In 1744, the Laws of Cricket were codified for the first time and then amended in 1774, when innovations such as lbw, the middle stump and a maximum bat width were added. These laws stated that 'the principals shall choose from amongst the gentlemen present two umpires who shall absolutely decide all disputes.' The codes were drawn up by the so-called "Star and Garter Club", whose members ultimately founded MCC at Lord's in 1787. MCC immediately became the custodian of the Laws and has made periodic revisions and recodifications subsequently.

Cricket and crisis
Cricket faced its first real crisis at the beginning of the 19th century when major matches virtually ceased during the culminating period of the Napoleonic Wars. This was largely due to a shortage of players and a lack of investment. But the game survived and a slow recovery began in 1815. Then cricket faced a crisis of its own making as the campaign to allow roundarm bowling gathered pace. The game also underwent a fundamental change of organisation with the formation, for the first time, of county clubs. All the modern county clubs, starting with Sussex, were founded during the 19th century. No sooner had the county clubs established themselves than they faced what amounted to "player action" as William Clarke created the travelling All-England Eleven in 1846. Other similar teams were created and this vogue lasted for about thirty years. But the counties and MCC prevailed. The growth of cricket in the mid and late 19th century was assisted by the development of the railway network. For the first time, teams from far apart could play one another without a prohibitively time-consuming journey, and spectators could travel longer distances to matches, increasing the size of crowds.

International cricket begins
The first ever international cricket game was between the USA and Canada in 1844. The match was played at Elysian Fields, Hoboken, New Jersey. In 1859, a team of leading English professionals set off to North America on the first-ever overseas tour.
In 1864, another bowling revolution resulted in the legalisation of overarm. The "Great Cricketer", W G Grace, made his debut the same year. In 1877, an England touring team in Australia played two matches against full Australian XIs that are now regarded as the inaugural Test matches. The following year, the Australians toured England for the first time and were a spectacular success. No Tests were played on that tour but more soon followed and, at The Oval in 1882, arguably the most famous match of all time gave rise to The Ashes. South Africa became the third Test nation in 1889.
Balls per over
In 1889 the immemorial four ball over was replaced by a five ball over and then this was changed to the current six balls an over in 1900. Subsequently, some countries experimented with eight balls an over. In 1922, the number of balls per over was changed from six to eight in Australia only. In 1924 the eight ball over was extended to New Zealand and in 1937 to South Africa. In England, the eight ball over was adopted experimentally for the 1939 season; the intention was to continue the experiment in 1940, but first-class cricket was suspended for the Second World War and when it resumed, English cricket reverted to the six ball over. The 1947 Laws of Cricket allowed six or eight balls depending on the conditions of play. Since the 1979/80 Australian and New Zealand seasons, the six ball over has been used worldwide and the most recent version of the Laws in 2000 only permits six ball overs.
Growth of Test cricket
When the Imperial Cricket Conference (as it was originally called) was founded in 1909, only England, Australia and South Africa were members. But that would soon change, and India, West Indies and New Zealand became Test nations before the Second World War and Pakistan soon afterwards. The international game grew with several "affiliate nations" getting involved and, in the closing years of the 20th century, three of those became Test nations also: Sri Lanka, Zimbabwe and Bangladesh.
Test cricket remained the most popular form of the sport throughout the 20th century but it had its problems, never more so than in the infamous "Bodyline Series" of 1932/33 when Douglas Jardine's England used so-called "leg theory" to try and neutralise the run-scoring brilliance of Australia's Don Bradman.
Suspension of South Africa (1970-1991)
The greatest crisis to hit international cricket was brought about by apartheid, the South African policy of racial segregation. The situation began to crystallise after 1961 when South Africa left the Commonwealth of Nations and so, under the rules of the day, its cricket board had to leave the International Cricket Conference (ICC). Cricket's opposition to apartheid intensified in 1968 with the cancellation of England's tour to South Africa by the South African authorities, due to the inclusion of "coloured" cricketer Basil D'Oliveira in the England team. In 1970, the ICC members voted to suspend South Africa indefinitely from international cricket competition. Ironically, the South African team at that time was probably the strongest in the world. Starved of top-level competition for its best players, the South African Cricket Board began funding so-called "rebel tours", offering large sums of money for international players to form teams and tour South Africa. The ICC's response was to blacklist any rebel players who agreed to tour South Africa, banning them from officially sanctioned international cricket. As players were poorly remunerated during the 1970s, several accepted the offer to tour South Africa, particularly players getting towards the end of their careers for whom a blacklisting would have little effect. The rebel tours continued into the 1980s but then progress was made in South African politics and it became clear that apartheid was ending. South Africa, now a "Rainbow Nation" under Nelson Mandela, was welcomed back into international sport in 1991.
World Series Cricket
The money problems of top cricketers were also the root cause of another cricketing crisis that arose in 1977 when the Australian media magnate Kerry Packer fell out with the Australian Cricket Board over TV rights. Taking advantage of the low remuneration paid to players, Packer retaliated by signing several of the best players in the world to a privately run cricket league outside the structure of international cricket. World Series Cricket hired some of the banned South African players and allowed them to show off their skills in an international arena against other world-class players. The schism lasted only until 1979 and the "rebel" players were allowed back into established international cricket, though many found that their national teams had moved on without them. Long-term results of World Series Cricket have included the introduction of significantly higher player salaries and innovations such as coloured kit and night games.
Limited overs cricket
In the 1960s, English county teams began playing a version of cricket with games of only one innings each and a maximum number of overs per innings. Starting in 1963 as a knockout competition only, limited overs grew in popularity and in 1969 a national league was created which consequently caused a reduction in the number of matches in the County Championship. Although many "traditional" cricket fans objected to the shorter form of the game, limited overs cricket did have the advantage of delivering a result to spectators within a single day; it did improve cricket's appeal to younger or busier people; and it did prove commercially successful. The first limited overs international match took place at Melbourne Cricket Ground in 1971 as a time-filler after a Test match had been abandoned because of heavy rain on the opening days. It was tried simply as an experiment and to give the players some exercise, but turned out to be immensely popular. Limited overs internationals (LOIs or ODIs, after One-day Internationals) have since grown to become a massively popular form of the game, especially for busy people who want to be able to see a whole match. The International Cricket Council reacted to this development by organising the first Cricket World Cup in England in 1975, with all the Test playing nations taking part.
21st Century cricket
Cricket now is arguably the second most popular sport in the world. In June 2001, the ICC introduced a "Test Championship Table" and, in October 2002, a "One-day International Championship Table". Australia has consistently topped both these tables since they were first published. Cricket remains a major world sport and is the most popular spectator sport in the Indian subcontinent. The ICC has expanded its Development Program with the goal of producing more national teams capable of competing at Test level. Development efforts are focused on African and Asian nations, and on the United States. In 2004, the ICC Intercontinental Cup brought first-class cricket to 12 nations, mostly for the first time. Cricket's newest innovation is Twenty20, essentially an evening entertainment. It has so far enjoyed enormous popularity and has attracted large attendances at matches as well as good TV audience ratings. The inaugural ICC Twenty20 World Cup tournament was held in 2007.
The future
The USA has long been seen as a promising market for cricket, but it has been difficult to make any impression on a public largely ignorant of the sport. The establishment of the Pro Cricket professional league in America in 2004 did little to broach this last frontier, though the game continues to grow through immigrant groups. China may also be a source of future cricket development, with the Chinese government announcing plans in 2004 to develop the sport, which is almost unknown in China, with the ambitious goals of qualifying for the World Cup by 2019 and becoming a Test nation. Despite the disproportionate publicity (in the cricket press at least) given to developments in the USA, the next major cricket nation is likely to be from South Asia. The game is already very popular in Nepal and Afghanistan, and results in competitions such as the under-18 world cup and the ACC Trophy suggest these teams are not short of natural talent. In addition, the ICC is conducting ongoing reviews of the interpretation of Law 24.3 of the Laws of Cricket.

Fakir Mohan Senapati

Fakir Mohan Senapati (1843-1918) lived during tumultuous times. Orissa was taken over by the British in 1803, and was soon thereafter incorporated into a transnational economic system. Senapati's consciousness of being an Oriya developed in a politicized context where an Oriya cultural identity (like many other minority identities in history) was at risk of disappearing. What drove him was less a desire for literary fame than the need to save and protect the language of the people around him.

Fakir Mohan Senapati was born into a Khandayat family in a small village, 'Malli Kashapur', near Balasore town. His father died when he was only one year and five months old, and his mother died 14 months later. His grandmother brought him up. He was often ill as an infant, and his grandmother usually took him to the fakirs. His name was originally Brajamohan; his grandmother changed it to Fakir as a dedication to the fakirs to whom she used to take him.

He received his education at the Barabati School of Balasore, passing as a minor. Due to his poor financial condition he could not study further. At the age of eighteen he worked as a teacher in the Barabati School for a salary of Rs 2.50 per month. He also served in the collectorate of Balasore as a clerk for a short period, and later taught at the Mission School of Balasore until 1871. Besides being a teacher he was devoted to gaining wisdom: he took part in discussions of English, Sanskrit and Bengali literature and was able to prove himself a pundit. During this period he became acquainted with John Beames, the then Collector of Balasore and himself a scholar who was writing a comparative grammar of the Indian languages; Beames took Fakir Mohan's help to learn Oriya. Well acquainted with Fakir Mohan's calibre, he urged him to become the "Diwan" of Nilagiri. He served as the Diwan of Nilagiri from 1871 to 1875 at a monthly salary of Rs 1,000, after which he served as "Diwan" at various places in Orissa: at Damapada from 1876 to 1877, at Dhenkanal from 1877 to 1883, at Daspalla from 1884 to 1886, at Pallahada from 1886 to 1887, at Keonjhar from 1887 to 1892, and at Damapada for a second time from 1894 to 1896. He stayed at Cuttack from 1896 to 1905 and was attached to various literary institutions. He spent his last years at Balasore until his death. He married in 1856, and married for a second time in 1871 after the death of his first wife. In 1877 his six-year-old son died; he was blessed with a second son in 1881. His second wife died of diarrhea in 1894. His marital life was not smooth and was full of sorrow.

There was a measure of idealism that inspired him, no doubt, but Senapati had a very clear idea of the strategic interests of the various groups at stake. He understood clearly that the future of at least the Oriya middle-class was bleak if Bengali instead of Oriya became the official medium of communication in Orissa. Senapati's concern with language as a social force - its seductive power, its authority, its abuses - clearly grew out of the struggles into which he had been thrust early in his life, the struggles to defend and save a language and a culture.

Fakir Mohan Senapati was intellectually restless and adventurous, and had the spirit of a reformer more than that of a writer in search of literary fame. He grew up in a part of colonial India that barely registered in the consciousness of the Viceroys and their officials. But it is from this particular vantage point that he created a unique synthesis of the traditional and the contemporary, a synthesis whose power and example are relevant even today. Senapati's critique was never merely negative; it was based on a vision of human equality and cultural diversity, of a radical humanism that was fed by a variety of religious traditions.

He established a press at Balasore in 1868 under the name "Utkal Press". He published various magazines from time to time under the names "Bodhadayini", "Nabasambad" and "Sambad Bahika". He was the President of the annual conference of "Utkal Sahitya Samaj" held in 1912. He visited the 'Satyabadi School' in 1915 and was much impressed by the work of Gopabandhu. He also presided over the annual conference of Utkal Sahitya Samaj held at Cuttack in March 1917. He gave birth to modern Oriya literature, writing many stories, novels and poems. His autobiography was Orissa's first and remains much admired as a self-portrait. He was the key creator of the modern Oriya short story and novel, and was prominent among the authors who wrote against the cruelty of British rule. He revolted against the conspiracy to suppress the Oriya language which was going on at the time. Thanks to his efforts in establishing an Oriya press and publishing Oriya books, newspapers and literary magazines, the Oriya language survived when it might otherwise have become a thing of the past. His writing was marked by pure, idiomatic Oriya and by melodrama, by his unblemished standing in society and by his revolutionary social ideology. He was an eminent laureate, writer, poet, editor, critic and administrator. He is remembered as "Vyasa Kabi". His contribution to Oriya language and literature is unforgettable.

Fakir Mohan's sense of humor and irony have remained unsurpassed in Oriya literature and it is his characteristic style which made him popular with a wide range of readers. He believed that Faith, Asceticism, Love and Devotion were four pillars that formed the base of "Dharma". His faith was derived from Islam, asceticism from Buddhism, love from Christianity and devotion from Vaishnavism.

Fakir Mohan completely discarded the traditional theme of romantic love between princes and princesses and wrote about common people and their problems in his novels. In contrast to the Sanskritised style of his contemporaries, he used colloquial, idiomatic Oriya in his writings with great skill and competence. If the works of earlier novelists seemed like prose renderings of medieval kavyas, Fakir Mohan's novels were realistic to the core. He can be favorably compared with 20th century novelists like Premchand and Bibhutibhusan Banerjee.

Fakir Mohan is considered the greatest prose writer in Oriya literature, yet it is remarkable that he hardly wrote any prose until he retired from administrative service. He translated the Ramayana, the Mahabharata and some of the Upanishads from the original Sanskrit, for which he is popularly known as "Vyasa Kavi". He wrote poetry too, but the themes of his poems were not considered conventionally fit material for poetry, and he used the colloquial, spoken, rugged language of the common man, which no poet in Oriya had done for centuries. Fakir Mohan wrote four novels, two volumes of short stories and one autobiography. He so mastered the art of the short story that he is also termed Katha Samrat (Emperor of Short Stories) in Oriya literature.

Tuesday, December 18, 2007

ENTREPRENEURSHIP FOR TECHNICAL STUDENTS

Introduction

In today's business economy a significant number of technical students are pursuing careers in technology entrepreneurial firms. Engineering schools offer an extensive curriculum in engineering and science, and students graduating from these programs are extremely well grounded in their technical fields of specialization. Unfortunately, these students have no access to the managerial concepts associated with new venture creation, despite their strong interest in this area.

To help prepare engineering and science students for careers in entrepreneurial organizations, the Business School of Management, in conjunction with the schools of Engineering and Science, is pleased to offer the following entrepreneurial management curriculum, focusing on providing an introduction to entrepreneurship for technical students.

The courses provide materials on business practices, opportunity recognition, entrepreneurial finance, entrepreneurial marketing and intellectual property. A key component of the program is a detailed examination of an actual technological innovation.

METHODOLOGY

As universities have embraced entrepreneurship education for their students in engineering and the sciences, how are these schools offering entrepreneurship to these students? What models of introducing engineering and science students to the principles and practice of entrepreneurship are currently in use? What role have key factors played in the development of these initiatives? These are the questions that we sought to answer in undertaking the current study.

The relative lack of information available about our research question led us to choose a qualitative, descriptive method. Multiple case studies using within-case and cross-case analysis are an appropriate method to provide description; this methodology usually employs a number of data-gathering techniques. To elicit the models and influential factors, we used a multiple case-study methodology. We collected our data with a combination of techniques such as site visits, review of internal documents, in-person and telephone interviews, and a follow-up survey.

The sample for this study was one of convenience, constructed in the following way: First, we evaluated the ten founding members of the National Consortium of Entrepreneurship Centers. Of the ten, we omitted five institutions: two non-university programs and three campuses without significant engineering and science programs. The remaining institutions are geographically diverse, each has a reputation for engineering and the sciences, and each has a formalized center or program for entrepreneurship.

MAJOR FINDINGS

A. Models

In reviewing internal documents and site-visit notes, it became clear that five categories of actions define entrepreneurship education in general: (1) developing intellectual content, including scholarly research; (2) gaining institutional acceptance, with attention to curricular, structural and fiscal issues; (3) engaging students and alumni; (4) building relationships with the business community; and (5) showcasing success. We used this framework as the starting point for discussing different ways to conceptualize models of implementation.

The first action area follows from a simple premise: Entrepreneurship can be taught. Students can learn to recognize opportunities, to gather and deploy resources, and to create and harvest businesses. Further, entrepreneurship has a legitimate place in academic life as the subject of research. While the degree to which different universities focus on research activities varies (in comparison to teaching or outreach programs), each seeks a foundation of intellectual credibility. The focus of a given program is linked to the composition of the faculty, whether tenured/tenure-track or clinical or adjunct or a combination.

Secondly, none of the centers or initiatives in this study would exist without the work of either a single individual or small group of people. These "champions" are themselves entrepreneurs in that they recognized the opportunity that technological entrepreneurship education represents and sought ways to make it a reality at their institutions. These "champions" may advocate institutional acceptance in the curriculum arena, in the structure of the program, in financing, or in some combination of these areas.

Engaging students and alumni is the third guiding category of action for entrepreneurship education. Encouraging current students to enroll in entrepreneurship courses is important to growing programs. Alumni are essential as guest speakers and are often helpful for internship placements and direct financial support.

The fourth key area is building institutional relationships with the business community, including venture capitalists. For instance, every university in the study makes use of an advisory board composed of entrepreneurs and business leaders. These serve as a bridge between the university and the business community. Internships and continuing education courses are other "bridge" initiatives.

Lastly, the technological entrepreneurship programs we studied use the Internet, publications and special events to showcase their success. All six universities maintain a Web site that describes the activities of their program or center.

Each of the universities follows one of three models, which differ on the dimensions of location within the university, organizational design and approach to attracting students. To keep these models general, we avoided making subdivisions based on type of faculty: adjunct, clinical, tenured and tenure-track, or a combination. Nor did we make separations based on whether courses count toward elective credits, a certificate or a minor. Follow-up exchanges led us to propose three models rather than the initial four.

For the Model A universities, the entrepreneurship initiatives for engineering and science students are based in or emanate from the business school, which offers a structured technological entrepreneurship curriculum for undergraduates - either a two-course sequence or a four-course concentration. These courses are designed to act as a magnet, pulling engineering and science students out of their respective schools. The courses are the same as those taken by MBA students.

The number of engineering and science students enrolled in entrepreneurship courses depends upon the university and the specifics of its initiative; an internal compensation system does not fund the Center for Entrepreneurship for educating non-business students. A remedy for this limit on course availability for engineering and science students is expected soon. At some universities, a Ventures Program with a gatekeeping function monitors and controls the number of engineering and science students who are permitted to enroll in entrepreneurship courses within the business school.
At the universities with a multi-school approach (Model C), this study shows that the balance tilts toward one school in the partnership. Currently, just 25-33 percent of the students in a given entrepreneurship course are engineering students, fewer than the faculty in the College of Business would like to see.

As one would expect, these models are continually evolving. At University (Model B), for example, there is evidence of cross-pollination between the engineering and business schools. This winter, one MBA course reserved one-quarter of class seats for non-business students. Outside the classroom, engineering students participate in BASES, the Business Association for Engineering Students, which sponsors engineering and business school student dinners, events, and job fairs.

B. Goals

The universities participating in this study reported that teaching was a primary goal of their technological entrepreneurship initiatives, particularly those targeted at undergraduates. In addition, the universities reported that creating new ventures and economic development are important objectives for their efforts. At some universities this goal applies largely to graduate and continuing education initiatives.

The range of responses on the primacy of research indicates some potential ambiguity in interpretation of our interview question. We asked: "At this time, what goals drive [university's] technical entrepreneurship initiative (e.g., new business creation, teaching, outreach, research)?" If respondents viewed this question from the level of the university, research would likely play an important role in curriculum- and legitimacy-building. Rensselaer, for example, cited the importance of building a research faculty for its technological entrepreneurship program. However, if the frame of reference is instead the entrepreneurship center or the specific activity of linking engineering students to courses, research may be perceived as less critical.

C. Factors

A final key finding of the current study is a synthesis of the factors that influenced the direction of technological entrepreneurship at our six participating universities. We asked a representative from each university these questions: "What circumstances have aided the development of your university's technological entrepreneurship initiative? What circumstances have been barriers?" Later, to confirm and summarize the responses we gathered during our telephone interviews, we designed a survey instrument to further define the role of those factors.

Our survey responses reveal four key assets in the development of technological entrepreneurship initiatives: (1) championing by the entrepreneurship center director (mean 2.00); (2) sufficient quality of courses (mean 1.83); (3) championing by alumni and current students (mean 1.67); and (4) using entrepreneurs as guest lecturers/mentors (mean 1.67). Respondents classified each of these influential factors as at least a "minor asset," and the overall rating had a standard deviation of 0.52 or less, meaning there was general consensus among respondents. The first, third and fourth factors point to the importance of broad-based support, from the external community and from the internal community. The second factor illustrates the importance of intellectually rigorous coursework.

Five factors in our survey yielded responses where the standard deviation met or exceeded 1.38: (1) championing by the dean of the business school; (2) championing by the dean of the engineering school; (3) availability of internal capital; (4) availability of qualified tenure and tenure-track faculty; and (5) acceptance of the entrepreneurship curriculum within the university. A high standard deviation (the maximum possible was 2.00) indicates considerable variation between respondents' perceptions of the role that factor played in the development of their university's efforts to introduce engineering and science students to entrepreneurship. These responses indicate that context and environment are themselves influential.
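
To make the summary statistics above concrete, here is a minimal sketch in Python of how a mean rating and a standard deviation are computed for a single survey factor. The ratings shown are hypothetical, assuming the five-point scale (-2 for a major obstacle up to +2 for a major asset) that the reported figures suggest; they are not the study's actual data.

from statistics import mean, stdev

# Hypothetical ratings from six respondents for one factor, on an assumed
# -2 (major obstacle) to +2 (major asset) scale; not the study's real data.
ratings = [2, 2, 1, 2, 1, 2]

print(round(mean(ratings), 2))   # 1.67 -> the factor's net (mean) rating
print(round(stdev(ratings), 2))  # 0.52 -> a low spread, i.e. general consensus

A factor with a high mean and a small standard deviation, as in this sketch, reads as a broadly agreed asset; a standard deviation approaching the 2.00 maximum noted above would instead signal disagreement among respondents.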

In addition, the survey results indicate that a lack of space or time for elective credits in most engineering degree programs is an obstacle to introducing engineering and science students to entrepreneurship. This factor alone received a negative net rating (-0.80), with all respondents indicating either a neutral stance or designating it as an obstacle. At the University, for example, students needed to stay an additional semester beyond their engineering program to complete a Technological Entrepreneurship Certificate. Faculty members in the College of Business are in the process of trying to persuade their engineering colleagues to address the situation.

Other obstacles mentioned by at least one university during our interviews are overcoming faculty resistance; dealing with low levels of support from campus administrators; finding entrepreneurs willing to make the commitment to teaching; raising money for technological entrepreneurship efforts; and negotiating bureaucracy.

D. Update

The universities participating in this study have continued to pursue implementation of their initiatives targeted at providing entrepreneurship education for their engineering and science students. At one university, the Technology Ventures Co-op has been endowed and new undergraduate entrepreneurship courses are available for non-fellows. Another has undergone major leadership changes at the center and school level, which in turn have resulted in a waxing and waning of momentum and commitment to entrepreneurship education as a function of leadership support. A number of students have completed the Technological Entrepreneurship Certificate Program. At this stage all the faculty affiliated with the Center are from the School of Business, and the university may therefore be changing from a Model C to a Model A approach. An undergraduate certificate of entrepreneurial excellence has been implemented, which requires the student to achieve a minimum GPA of 3.3, to complete an approved internship, and to pass a certificate exam.

The most dramatic change in approach has been demonstrated at one university, reflecting turnover in all the key leadership positions (president, provost, dean of engineering and dean of management) since the initial study was completed. Its implementation approach has changed from Model A to Model C. A curriculum proposal for a new engineering entrepreneurship degree program (developed by engineering and management faculty and funded by the provost's office) was approved by the curriculum committees of both the engineering and management schools. However, submission to the institute-wide curriculum committee was delayed when the strategic plan developed under the leadership of a new president called for a university-wide general curriculum requirement in entrepreneurship. Currently, a university-level committee, headed by the provost and including the dean of engineering and the director of the entrepreneurship center from the school of management, is developing novel curricular and extracurricular programs for implementing the university-wide strategic focus on "scientific and technological entrepreneurship".

The four key assets cited in the previous subsection on factors have continued to play an important role at each university in the development of these entrepreneurship education initiatives. However, as the programs have moved beyond startup, additional factors have gained in importance in the most dynamic cases, particularly senior leadership (president, provost, and deans) and external funding/donors.

IV. IMPLICATIONS

We hope that this discussion of the models and influential factors relevant to introducing engineering and science students to entrepreneurship provides guidance for other universities embarking on this process.

Given the newness of the field of technological entrepreneurship and its location at the boundary of academia and practice, legitimacy is clearly an issue. Our results suggest three legitimacy-enhancing strategies. First, a strong entrepreneurship center director is necessary to effectively champion the linking of engineering and science students to entrepreneurship courses. Second, a program that establishes links to practicing entrepreneurs will lend credibility to course content and, frequently, generate financial support for the program and the university. Lastly, ensuring that courses are well-constructed and rigorous will foster approval from the university community at large.

These findings validate and extend the strategies discovered by the second author in his study of infusing entrepreneurship into the core business curriculum: "Promote collaboration among entrepreneurship and non-entrepreneurship faculty ... recruit an entrepreneurship program director, as well as entrepreneurship faculty and program staff, who are effective champions both internally and externally ... build and leverage a network of entrepreneurs and other supporters of the entrepreneurship center ... and enlist the key administrative leaders as champions".

As is true with all multiple case-study research, this study captures only a defined period of time. Suggestions for future research include revisiting each of the six universities to note areas of progress and new and continuing obstacles. Additionally, this study could serve as a pilot for a wider study of technological entrepreneurship initiatives at all universities.

Human resource: The next process of BPO

The human resources (HR) department is critical for employee well-being in any business, no matter how small. Just as companies have realised the importance of customers and are taking proactive steps to ensure their satisfaction, they have also recognised the key role played by their employees in winning the battle of the marketplace. A motivated and innovative employee can work wonders for a company. Hence getting and retaining a motivated workforce has found its way on to the CEO’s agenda. Also, the slowing economy has forced the workforce to be productive; once employees become productive, companies want to retain them at any cost.

This has meant a shift in the focus of HR departments from routine activities to playing a more proactive role of constantly motivating and retaining employees. Usually, HR departments are inundated with work related to employees; some of these activities are routine, requiring little imagination and creativity. That is, most of the time the HR department does activities ‘of’ the employees and not ‘for’ the employees. Some such mundane HR responsibilities include payroll, benefits, hiring, firing, and keeping up-to-date with state and central tax laws. Companies the world over prefer to spend resources and time on critical activities (motivating and retaining the existing workforce) and hence prefer to outsource routine activities.

Historically, reducing operating costs has been the main reason for outsourcing. However, access to best practices, latest technology, and faster turnaround are some of the other benefits that outsourcing provides. Outsourcing has now become an important element of business performance transformation, as this allows resources to be concentrated on core competencies. Outsourcing allows HR to make a stronger and more formidable contribution to the growth and well-being of the company.

HR outsourcing is defined as “a process of outsourcing involving particular tasks like recruitment, making payroll, employee benefits administration, fixed assets administration, employee logistics management, training and development to a third party having expertise in these respective fields”.

The various HR functions in any organisation include:

· Payroll administration (producing cheques, handling taxes, dealing with sick time and vacations)
· Employee benefits (health, medical, life insurance, cafeteria, etc)
· Human resource management (workers’ compensation, dispute resolution, safety inspection, office policies and handbooks) and others.

Many of these functions are of an outsourceable nature. According to Hewitt Associates, over the past 20 years, the HR outsourcing marketplace has evolved beyond benefits administration. Companies are outsourcing more HR activities to achieve a fundamental shift from an administrative, tactical and compliance-driven function to a focus on the strategic acquisition, motivation and retention of talent. The transactional functions must be done right and can be handled with greater quality and efficiency by a provider who has the process and technology expertise.

The market is growing at an amazing pace. Estimates of industry growth vary from a compound annual growth rate (CAGR) of 8.6 percent (Gartner Dataquest) to 12 percent (Yankee Group) from 2001 through 2007, which means that the worldwide HR outsourcing market is set to grow from $21.7 billion in 2000 to about $58.5 billion in 2005.

If forecasts are anything to go by, this is the fastest growing outsourcing segment.

Reasons for outsourcing HR

According to analysts, cost reduction is usually the most crucial reason for HR outsourcing as it can lead to savings of 30-40 percent for companies.

The other reasons are:

· Cost-effectiveness
· Reduced administrative costs
· Capitalising on technological advances/expertise
· Improved customer service
· Redirecting HR focus toward strategy/planning
· Focus on core business
· Reduced corporate overheads
· Provision of ‘seamless’ delivery of services
· Insufficient staff.

Process of outsourcing HR

The process of outsourcing begins with identifying core and non-core activities, i.e. activities the companies need to do in-house, as against outsourceable activities.

The next step is finding a suitable vendor who can carry out such activities on behalf of the company. HR outsourcing service providers can fall into one of four categories:

· Professional employer organisation (PEO)
· Business process outsourcing (BPO)
· Application service providers (ASP)
· e-services

A professional employer organisation (PEO) takes legal responsibility for the employees. The PEO and business owner are partners, with the PEO handling HR aspects and the business owner handling all other aspects. BPO is a generic term and could refer to all fields and activities, but as far as HR is concerned, a BPO ensures that a company has access to the latest technology. Application service providers (ASPs) host HR software on the Web and rent it to users, while e-services are those HR activities that are Web-based.

HR outsourcing, though a slow starter, is now one of the fastest growing BPO domains worldwide; in fact, HR activities such as payroll were among the first to be outsourced. One of the reasons for the slow start could be the nature of the responsibilities the HR department has. The HR department is critical for employee well-being in any business, no matter how small, and any mix-up here can cause major legal problems for the business, as well as major employee dissatisfaction.

HR BPO companies

HR BPO vendors add value to the company by either putting in new technology or applying existing technology in a new way to improve a process.

This is to make sure that a company’s HR system is supported by the latest technologies, such as self-access and HR data warehousing. Some HR BPOs offer all related services in the HR domain, i.e. they offer an end-to-end system to meet all the company’s HR needs.

Other HR BPO firms allow companies to choose from various offerings (a la carte); companies can pick and choose from the various services on offer.

Typical services include:

· Payroll administration: Producing cheques, handling taxes, and dealing with sick time.
· Employee benefits: Health, medical, life, 401(k) plans, cafeteria plans, etc.
· HR management: Workers’ compensation, dispute resolution, safety inspection, office policies and handbooks.

As mentioned above, HR outsourcing is the fastest growing outsourcing domain. Several major joint ventures have been inked between large corporations and service providers for end-to-end HR outsourcing.

HR BPO, the next wave

HR outsourcing could well be the next big thing in India's BPO space.

Globally a $40-$60 billion industry, the HR BPO segment is still a fledgling within the Indian BPO industry. Yet to take off in a big way, it is nevertheless poised to be the next big thing in the BPO space. The opportunities are undoubtedly immense, with the global market growing at 14 percent per annum. The Indian HR BPO industry can be divided into two categories: large multinational players such as Hewitt with an outsourcing centre in India, and the pool of small outsourcers that cater to the local market (mostly engaged in payroll processing). Midway between them are the few established third-party outsourcers who serve international clients.

The history of the HR outsourcing (HRO) industry can be traced back more than five decades, to when ADP (Automatic Data Processing) set up its payroll processing services in the US. Today, the company has annual revenues of $7 billion and 40,000 associates. Global HR BPO players like Hewitt and Fidelity have set up operations in India. Chennai-based Secova eServices is the first third-party Indian HR BPO organisation in the country. V Chandrasekaran, Co-Founder and Chief Technology Officer, Secova eServices, says, “At present the Indian share of the pie is a mere $43 million, according to Nasscom, which is insignificant compared to the overall opportunity.”



Slow start

Despite its high potential, the HR BPO industry in India has not witnessed the growth patterns that other BPO segments have achieved. The reason is obvious: It is a complicated process which requires strong domain knowledge. And this is not easy considering the fact that the US has 50 states with different taxation laws, federal laws, etc. “Lack of knowledge about taxation is the key factor, apart from understanding the culture, staffing, training, compensation and leave administration procedures,” says Rajiv Srivastava, Senior Vice President, Business Development, Dimensions BPO India. This HR BPO organisation has centres in Mumbai and Kochi catering to US-based clients. The company has tie-ups with many HR outsourcers for which it does back-office work (mostly payroll processing).

Besides the fact that one has to know how to deduct tax correctly, based on state, marital status (single/divorced), work timings and so on, the cost of maintaining and constantly updating the system is very high. Dimensions BPO has a proprietary HRIS (Human Resources Information System) that reduces the cost for its clients by almost 40 percent.

Services offered

Every HR BPO company offers payroll services, and later graduates to services like benefits, education/training, recruiting/staffing and others. “At the higher end of the market you have large players such as Hewitt, offering a wide spectrum of services across all the segments. Outside of the top-tier vendors, the market is dispersed, with a number of small organisations offering a limited range of services and serving limited geographies. Analysts have pointed out that there is a huge opportunity for a services provider catering to the mid-market. This is the space that Secova is targeting. We believe that we are pioneers in the mid-market HRO segment and leverage the ‘best-shore’ strategy.” Secova eServices’ initial focus will be on health and welfare benefits administration, payroll and HRIS services for the mid-market, which accounts for $13.2 billion of the HR BPO pie.

Like most HR outsourcing organisations in the US, Dimensions is in the process of starting self-services (like helpdesk). For instance if there is $2 less in somebody’s paycheque, the individual can call back for clarification and further action.

Future prospects

HR BPO is the least serviced segment of the BPO space, though the potential is enormous. Gartner has forecast that HR BPO will reach $51 billion, representing 39 percent of all BPO revenue, by the end of this year. “Other analysts such as Bernstein and Everest Consulting have said that HRO is in the ‘cradle of opportunity’ and appears to be the best among various BPO opportunities for growth. In our specific market space, we see a $13.2 billion market of which $1.3 billion could be serviced offshore in the next three years.”

Career options

The skills needed for HR BPO are more complicated than the simple customer-care skills of language and accent. Domain expertise and knowledge of specific processes are a must. “What is imperative in this space is domain expertise coupled with knowledge about specific legal, regulatory and compliance structures in the markets we service. For employees, this provides a learning opportunity and scope to build a fulfilling and long-lasting career.”

Being an early player in this industry segment, it was not possible for the company to get trained people for the job. Secova takes people with the basic skills to handle voice and data services and trains them in the HR domain in addition to client-specific training. The company usually targets talent with experience in related fields such as healthcare and insurance.

Dimensions BPO hires graduates with a commerce background for basic payroll work. They should, of course, have a good knowledge of accounting. For self-services the need is for graduates who can provide the relevant information to callers.

“A career in HR BPO pays well as recruits tend to have a strong domain knowledge,” acknowledges Srivastava. As far as attracting talent is concerned, he concedes that there is no dearth of BCom graduates, but the problem is that they have to be clear that they want to be in the HR outsourcing industry, “They jump to other fields and the training is a waste. Retaining them is a tough task.”

Training focus

It is a formidable task for HR BPO companies to train their staff to be HR specialists, experts on US taxation laws, statutory compliances in the country, etc. At Secova, personnel get trained to be benefits counsellors and many of them get certified by professional bodies. Some payroll staff are certified by the American Payroll Association. The company provides the necessary training for certification over and above maintaining a learning environment.

Similarly, Dimensions BPO has a five-week long training by experts, mostly on payroll process (taxation laws, etc).

HR outsourcing is considered the next big wave in India’s BPO scene. What is needed is concerted domain knowledge to capture a larger share of the global HR BPO pie.

Transistor

A transistor is a semiconductor device, commonly used as an amplifier or an electrically controlled switch. The transistor is the fundamental building block of the circuitry that governs the operation of computers, cellular phones, and all other modern electronics.

Because of its fast response and accuracy, the transistor may be used in a wide variety of digital and analog functions, including amplification, switching, voltage regulation, signal modulation, and oscillators. Transistors may be packaged individually or as part of an integrated circuit, which may hold a billion or more transistors in a very small area.

Introduction

Modern transistors are divided into two main categories: bipolar junction transistors (BJTs) and field effect transistors (FETs). Application of current in BJTs, and of voltage in FETs, between the input and common terminals increases the conductivity between the common and output terminals, thereby controlling the current flow between them. Transistor characteristics depend on the type of device.
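
As a rough numerical illustration of that difference, the Python sketch below contrasts current control in a BJT with voltage control in a FET. The current gain of 100, the simple square-law FET model and all component values are assumptions chosen for the example, not figures from this article.

# Sketch: a BJT's collector current is set by base current (current control),
# while a MOSFET's drain current is set by gate-source voltage (voltage control).
# All values below are illustrative assumptions.

beta = 100                  # assumed BJT current gain
i_b = 20e-6                 # 20 microamps of base current
i_c = beta * i_b            # about 2 mA of collector current
print(i_c)

k = 2e-3                    # assumed MOSFET transconductance parameter (A/V^2)
v_gs, v_th = 2.0, 1.0       # assumed gate-source and threshold voltages (V)
i_d = 0.5 * k * (v_gs - v_th) ** 2   # square-law model, saturation region
print(i_d)                  # about 1 mA of drain current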

The term "transistor" originally referred to the point contact type, but these only saw very limited commercial application, being replaced by the much more practical bipolar junction types in the early 1950s. Ironically both the term "transistor" itself and the schematic symbol most widely used for it today are the ones that specifically referred to these long-obsolete devices. For a short time in the early 1960s, some manufacturers and publishers of electronics magazines started to replace these with symbols that more accurately depicted the different construction of the bipolar transistor, but this idea was soon abandoned.

In analog circuits, transistors are used in amplifiers (direct current amplifiers, audio amplifiers, radio frequency amplifiers) and linear regulated power supplies. Transistors are also used in digital circuits, where they function as electronic switches, but rarely as discrete devices, almost always being incorporated in monolithic integrated circuits. Digital circuits include logic gates, random access memory (RAM), microprocessors, and digital signal processors (DSPs).

Importance
The transistor is considered by many to be the greatest invention of the twentieth century. It is the key active component in practically all modern electronics. Its importance in today's society rests on its ability to be mass produced using a highly automated process (fabrication) that achieves vanishingly low per-transistor costs.

Although millions of individual (known as discrete) transistors are still used, the vast majority of transistors are fabricated into integrated circuits (often abbreviated as IC and also called microchips or simply chips) along with diodes, resistors, capacitors and other electronic components to produce complete electronic circuits. A logic gate consists of about twenty transistors whereas an advanced microprocessor, as of 2006, can use as many as 1.7 billion transistors (MOSFETs).

The transistor's low cost, flexibility and reliability have made it a universal device for non-mechanical tasks, such as digital computing. Transistorized circuits have replaced electromechanical devices for the control of appliances and machinery as well. It is often less expensive and more effective to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.

Because of the low cost of transistors and hence digital computers, there is a trend to digitize information. With digital computers offering the ability to quickly find, sort and process digital information, more and more effort has been put into making information digital. As a result, today, much media data is delivered in digital form, finally being converted and presented in analog form by computers. Areas influenced by the Digital Revolution include television, radio, and newspapers.

Advantages of transistors over vacuum tubes

Prior to the development of transistors, vacuum (electron) tubes (or in the UK thermionic valves or just valves) were the main active components in electronic equipment. The key advantages that have allowed transistors to replace their vacuum tube predecessors in most applications are:
· Small size and minimum weight, allowing the development of miniaturized electronic devices.
· Highly automated manufacturing processes, resulting in low per-unit cost.
· Lower possible operating voltages, making transistors suitable for small, battery-powered applications.
· No warm-up period required after power application.
· Lower power dissipation and generally greater energy efficiency.
· Higher reliability and greater physical ruggedness.
· Extremely long life. Transistorized devices produced more than 30 years ago are still in service.
· Complementary devices available, facilitating the design of complementary-symmetry circuits, something not possible with vacuum tubes.
· Ability to control very large currents, as much as several hundred amperes.
· Insensitivity to mechanical shock and vibration, thus avoiding the problem of microphonics in audio applications.

Types
Transistors are categorized by:
· Semiconductor material: germanium, silicon, gallium arsenide, silicon carbide, etc.
· Structure: BJT, JFET, IGFET (MOSFET), IGBT, "other types"
· Polarity: NPN, PNP (BJTs); N-channel, P-channel (FETs)
· Maximum power rating: low, medium, high
· Maximum operating frequency: low, medium, high, radio frequency (RF), microwave (the maximum effective frequency of a transistor is denoted by the term fT, an abbreviation for "frequency of transition"; the frequency of transition is the frequency at which the transistor yields unity gain)
· Application: switch, general purpose, audio, high voltage, super-beta, matched pair
· Physical packaging: through-hole metal, through-hole plastic, surface mount, ball grid array, power modules
Thus, a particular transistor may be described as: silicon, surface mount, BJT, NPN, low power, high frequency switch.

Transistor Biasing
The proper flow of zero signal collector current and the maintenance of the proper collector-emitter voltage during the passage of signal is known as transistor biasing.

The basic purpose of transistor biasing is to keep the base-emitter junction properly forward biased and the collector-base junction properly reverse biased during the application of signal. This can be achieved with a bias battery or by associating a suitable circuit with the transistor. The latter method is more efficient and is frequently employed. The circuit which provides transistor biasing is known as a biasing circuit. It may be noted that transistor biasing is essential for the proper operation of a transistor in any circuit.

Essentials of a Transistor Biasing Circuit
Transistor biasing is required for faithful amplification. The biasing network associated with the transistor should meet the following requirements:
 It should ensure proper zero signal collector current.
 It should ensure that VCE does not fall below 0.5V for Ge transistors and 1V for silicon transistors at any instant.
 It should ensure the stabilisation of operating point.

Stabilisation
The collector current in a transistor changes rapidly when:
i. the temperature changes,
ii. the transistor is replaced by another of the same type. This is due to the inherent variations of transistor parameters.

When the temperature changes or the transistor is replaced, the operating point (i.e. zero signal IC and VCE) also changes. However, for faithful amplification, it is essential that the operating point remains fixed. This necessitates making the operating point independent of these variations. This is known as stabilisation.

The process of making the operating point independent of temperature changes or variations in transistor parameters is known as stabilisation.

Once stabilisation is done, the zero signal IC and VCE become independent of temperature variations or replacement of the transistor, i.e. the operating point is fixed. A good biasing circuit always ensures the stabilisation of the operating point.

Stability Factor
It is desirable and necessary to keep IC constant in the face of variations of ICBO (sometimes represented as ICO). The extent to which a biasing circuit is successful in achieving this goal is measured by stability factor S. It is defined as under:
The rate of change of collector current IC w.r.t. the collector leakage current ICO at constant β and IB is called the stability factor, i.e.
Stability factor, S = dIC/dICO at constant IB and β
The stability factor indicates the change in collector current IC due to the change in collector leakage current ICO. Thus a stability factor of 50 means that IC changes 50 times as much as any change in ICO. In order to achieve greater thermal stability, it is desirable to have as low a stability factor as possible. The ideal value of S is 1 but it is never possible to achieve it in practice. Experience shows that values of S exceeding 25 result in unsatisfactory performance.
The general expression of stability factor for a C.E. configuration can be obtained as under:
IC = βIB + (β + 1)ICO
Differentiating the above expression w.r.t. IC, we get,
1 = β(dIB/dIC) + (β + 1)(dICO/dIC)
1 = β(dIB/dIC) + (β + 1)/S        [since dICO/dIC = 1/S]
S = (β + 1)/(1 - β(dIB/dIC))
Need for stabilisation.
Stabilisation of the operating point is necessary due to the following reasons:-
(i) Temperature dependence of IC (ii) Individual variations (iii) Thermal runaway.
(i) Temperature dependence of IC.
The collector current IC is given by;
IC=IB+ICEO = IB+(+1)ICBO
The collector leakage current ICBO is greatly influenced (especially in germanium transistors) by temperature changes. A rise of 10°C doubles the collector leakage current, which may be as high as 0.2 mA for low-powered germanium transistors. As biasing conditions in such transistors are generally set so that the zero signal IC = 1 mA, the change in IC due to temperature variations cannot be tolerated. This necessitates stabilising the operating point, i.e. holding IC constant in spite of temperature variations.
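To give a feel for the scale of this effect, the short Python sketch below estimates how the zero-signal collector current drifts with temperature when IB is held constant, using the rule of thumb quoted above that ICBO doubles for every 10°C rise. The component values (β = 50, IB = 20 µA, ICBO = 2 µA at 25°C) are illustrative assumptions, not figures from this project.

# Rough sketch: temperature drift of IC when the bias holds IB constant.
# Uses the rule of thumb above that ICBO doubles for every 10 degC rise.
# All values are illustrative assumptions, not taken from this project.

def collector_current(beta, i_b, i_cbo):
    """IC = beta*IB + (beta + 1)*ICBO for a CE-connected transistor."""
    return beta * i_b + (beta + 1) * i_cbo

beta = 50          # assumed current gain
i_b = 20e-6        # 20 uA base current, chosen so zero-signal IC is about 1 mA
i_cbo_25 = 2e-6    # assumed leakage of a small Ge transistor at 25 degC

for temp in (25, 35, 45, 55):
    i_cbo = i_cbo_25 * 2 ** ((temp - 25) / 10)   # doubles every 10 degC
    ic = collector_current(beta, i_b, i_cbo)
    print(f"{temp} degC: ICBO = {i_cbo * 1e6:.1f} uA, IC = {ic * 1e3:.2f} mA")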
(ii) Individual variations.
The values of β and VBE are not exactly the same for any two transistors, even of the same type. Further, VBE itself decreases when the temperature increases. When a transistor is replaced by another of the same type, these variations change the operating point. This necessitates stabilising the operating point, i.e. holding IC constant irrespective of individual variations in transistor parameters.
(iii) Thermal runaway.
The collector current for a CE configuration is given by:
IC = βIB + (β + 1)ICBO ......(i)
The collector leakage current ICBO is strongly dependent on temperature. The flow of collector current produces heat within the transistor. This raises the transistor temperature, and if no stabilisation is done, the collector leakage current ICBO also increases. It is clear from eq. (i) that if ICBO increases, the collector current IC increases by (β + 1)ICBO. The increased IC will raise the temperature of the transistor, which in turn will cause ICBO to increase. This effect is cumulative and, in a matter of seconds, the collector current may become very large, causing the transistor to burn out.
The self-destruction of an unstabilised transistor is known as thermal runaway.
In order to avoid thermal runaway and the consequent destruction of the transistor, it is essential that the operating point is stabilised, i.e. IC is kept constant. In practice, this is done by causing IB to decrease automatically with temperature increase by circuit modification. The decrease in IB then compensates for the increase in (β + 1)ICBO, keeping IC nearly constant. In fact, this is what is always aimed at while designing a biasing circuit.

Methods of Transistor Biasing
In the transistor amplifier circuits drawn so far, biasing was done with the aid of a battery VBB which was separate from the battery VCC used in the output circuit. However, in the interest of simplicity and economy, it is desirable that the transistor circuit should have a single source of supply, namely the one in the output circuit (i.e. VCC). The following are the most commonly used methods of obtaining transistor biasing from one source of supply (i.e. VCC):
(i) Base resistor method (ii) Biasing with feedback resistor
(iii) Voltage-divider bias.
In all these methods, the same basic principle is employed, i.e. the required value of base current (and hence IC) is obtained from VCC under zero signal conditions. The value of the collector load RC is selected keeping in view that VCE should not fall below 0.5V for germanium transistors and 1V for silicon transistors.
Base Resistor Method
In this method, a high resistance RB (several hundred kΩ) is connected between the base and the +ve end of the supply for an npn transistor, and between the base and the negative end of the supply for a pnp transistor. Here, the required zero signal base current is provided by VCC and it flows through RB. This is because the base is now positive w.r.t. the emitter, i.e. the base-emitter junction is forward biased. The required value of zero signal base current IB (and hence IC = βIB) can be made to flow by selecting the proper value of base resistor RB.

Circuit analysis.
It is required to find the value of RB so that required collector current flows in the zero signal conditions. Let IC be the required zero signal collector current.
IB = IC/β
Considering the closed circuit ABENA and applying Kirchhoff's voltage law, we get, VCC = IB RB + VBE
or IB RB = VCC - VBE
RB = (VCC - VBE)/IB ......(i)
As VCC and IB are known and VBE can be seen from the transistor manual, the value of RB can be readily found from exp. (i).
Since VBE is generally quite small compared with VCC, it can be neglected with little error. It then follows from exp. (i) that
RB = VCC/IB
It may be noted that VCC is a fixed known quantity and IB is chosen at some suitable value. Hence, RB can always be found directly, and for this reason, this method is sometimes called fixed-bias method.
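As a quick numerical illustration of the fixed-bias procedure just described, the following Python sketch picks RB for an assumed supply and target operating point. The values used (VCC = 9 V, β = 100, target IC = 1 mA, VBE = 0.7 V for silicon) are assumptions for illustration only.

# Minimal sketch of the base-resistor (fixed-bias) design described above.
# All numbers are assumed example values, not taken from this project.

V_CC = 9.0     # supply voltage (V), assumed
V_BE = 0.7     # base-emitter drop of a silicon transistor (V), assumed
beta = 100     # assumed current gain
I_C = 1e-3     # desired zero-signal collector current (A), assumed

I_B = I_C / beta                   # IB = IC / beta
R_B_exact = (V_CC - V_BE) / I_B    # RB = (VCC - VBE) / IB, from exp. (i)
R_B_approx = V_CC / I_B            # neglecting VBE, since VBE << VCC

print(f"IB = {I_B * 1e6:.0f} uA")
print(f"RB (exact)  = {R_B_exact / 1e3:.0f} kOhm")
print(f"RB (approx) = {R_B_approx / 1e3:.0f} kOhm")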
Stability factor
Stability factor, S = (β + 1)/(1 - β(dIB/dIC))
In the fixed-bias method of biasing, IB is independent of IC so that dIB/dIC = 0. Putting the value of dIB/dIC = 0 in the above expression, we have,
Stability factor, S = β + 1
Thus the stability factor in a fixed bias is (β + 1). This means that IC changes (β + 1) times as much as any change in ICO. For instance, if β = 100, then S = 101, which means that IC increases 101 times faster than ICO. Due to the large value of S in a fixed bias, it has poor thermal stability.
Advantages
i. This biasing circuit is very simple as only one resistance RB is required.
ii. Biasing conditions can easily be set and the calculations are simple.
iii. There is no loading of the source by the biasing circuit since no resistor is employed across base-emitter junction.
Disadvantages
i. This method provides poor stabilisation. It is because there is no means to stop a self-increase in collector current due to temperature rise and individual variations. For example, if β increases due to transistor replacement, then IC also increases by the same factor, as IB is constant.
ii. The stability factor is very high. Therefore, there are strong chances of thermal runaway. Due to these disadvantages, this method of biasing is rarely employed.

Biasing with Feedback Resistor
In this method, one end of RB is connected to the base and the other end to the collector. Here, the required zero signal base current is determined not by VCC but by the collector-base voltage VCB. It is clear that VCB forward biases the base-emitter junction and hence base current IB flows through RB. This causes the zero signal collector current to flow in the circuit.

Circuit analysis. The required value of RB needed to give the zero signal current IC can be determined as follows.
VCC = IC RC + IB RB + VBE
RB = (VCC - VBE - IC RC)/IB
= (VCC - VBE - IC RC)/(IC/β)
Alternatively,
VCE = VBE + VCB
VCB = VCE - VBE
RB = VCB/IB = (VCE - VBE)/IB, where IB = IC/β
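A numerical sketch of this collector-feedback arrangement follows directly from the relations above. The values (VCC = 12 V, β = 100, target IC = 1 mA, RC = 4.7 kΩ, VBE = 0.7 V) are assumed for illustration.

# Sketch: feedback-resistor (collector-to-base) bias calculation from above.
# All values are assumed for illustration.

V_CC = 12.0    # supply voltage (V), assumed
V_BE = 0.7     # silicon base-emitter drop (V), assumed
beta = 100     # assumed current gain
I_C = 1e-3     # desired zero-signal collector current (A), assumed
R_C = 4.7e3    # collector resistor (ohm), assumed

I_B = I_C / beta
R_B = (V_CC - V_BE - I_C * R_C) / I_B    # RB = (VCC - VBE - IC*RC) / IB
print(f"IB = {I_B * 1e6:.0f} uA, RB = {R_B / 1e3:.0f} kOhm")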
It can be shown mathematically that the stability factor S for this method of biasing is less than (β + 1), i.e.
Stability factor, S < (β + 1)
Therefore, this method provides better thermal stability than the fixed bias.
Advantages
i. It is a simple method as it requires only one resistance RB.
ii. This circuit provides some stabilisation of the operating point as discussed below:
VCE = VBE + VCB
Suppose the temperature increases. This will increase the collector leakage current and hence the total collector current. But as soon as the collector current increases, VCE decreases due to the greater drop across RC. The result is that VCB decreases, i.e. less voltage is available across RB. Hence the base current IB decreases. The smaller IB tends to restore the collector current to its original value.
Disadvantages
i. The circuit does not provide good stabilisation because the stability factor is fairly high, though it is less than that of fixed bias. Therefore, the operating point does change, although to a lesser extent, due to temperature variations and other effects.
ii. This circuit provides a negative feedback which reduces the gain of the amplifier as explained hereafter. During the positive half-cycle of the signal, the collector current increases. The increased collector current results in a greater voltage drop across RC. This reduces the base current and hence the collector current.

Voltage Divider Bias Method
This is the most widely used method of providing biasing and stabilisation to a transistor. In this method, two resistances R1 and R2 are connected across the supply voltage VCC and provide biasing. The emitter resistance RE provides stabilisation. The name "voltage divider" comes from the voltage divider formed by R1 and R2. The voltage drop across R2 forward biases the base-emitter junction. This causes the base current, and hence the collector current, to flow in the zero signal conditions.
Circuit analysis. Suppose that the current flowing through resistance R1 is I1. As the base current IB is very small, it can be assumed with reasonable accuracy that the current flowing through R2 is also I1.
(i) Collector current IC:
I1 = VCC/(R1 + R2)
Voltage across resistance R2, V2 = VCC R2/(R1 + R2)
Applying Kirchhoff's voltage law to the base circuit,
V2 = VBE + VE
V2 = VBE + IE RE
IE = (V2 - VBE)/RE
Since IE ≈ IC,
IC = (V2 - VBE)/RE ......(i)
It is clear from exp. (i) above that IC does not at all depend upon β. Though IC depends upon VBE, in practice V2 >> VBE so that IC is practically independent of VBE. Thus IC in this circuit is almost independent of transistor parameters and hence good stabilisation is ensured. It is for this reason that potential divider bias has become the universal method for providing transistor biasing.
(ii) Collector-emitter voltage VCE.
Applying Kirchhoff’s voltage law to the collector side,
VCC = ICRC +VCE +IERE
= ICRC +VCE +ICRE
= IC (RC+RE)+VCE
VCE = VCC-IC(RC+RE)
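Putting exp. (i) and the VCE relation together, the short Python sketch below works out the zero-signal operating point of a hypothetical voltage-divider biased stage. All component values (VCC = 12 V, R1 = 10 kΩ, R2 = 5 kΩ, RC = 1 kΩ, RE = 2 kΩ, VBE = 0.7 V) are assumed for illustration.

# Sketch of the voltage-divider bias calculation described above.
# Component values are illustrative assumptions, not from this project.

V_CC = 12.0           # supply voltage (V), assumed
R1, R2 = 10e3, 5e3    # divider resistors (ohm), assumed
R_C, R_E = 1e3, 2e3   # collector and emitter resistors (ohm), assumed
V_BE = 0.7            # silicon base-emitter drop (V), assumed

V2 = V_CC * R2 / (R1 + R2)        # voltage across R2 (divider output)
I_C = (V2 - V_BE) / R_E           # exp. (i): IC ~= IE = (V2 - VBE) / RE
V_CE = V_CC - I_C * (R_C + R_E)   # collector-emitter voltage

print(f"V2  = {V2:.2f} V")
print(f"IC  = {I_C * 1e3:.2f} mA")
print(f"VCE = {V_CE:.2f} V")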
Stabilisation. In this circuit, excellent stabilisation is provided by RE. Consideration of exp. (i) reveals this fact.
V2 = VBE + IC RE
Suppose the collector current IC increases due to a rise in temperature. This will cause the voltage drop across the emitter resistance RE to increase. As the voltage drop across R2 (i.e. V2) is independent of IC, VBE decreases. This in turn causes IB to decrease. The reduced value of IB tends to restore IC to its original value.
Stability factor. It can be shown mathematically that the stability factor of the circuit is given by:
Stability factor, S = (β + 1) x (1 + RT/RE)/(β + 1 + RT/RE), where RT = R1R2/(R1 + R2)
If the ratio RT/RE is very small, then RT/RE can be neglected as compared to 1 and the stability factor becomes:
Stability factor = (β + 1) x 1/(β + 1) = 1
This is the smallest possible value of S and leads to the maximum possible thermal stability. Due to design considerations, RT/RE has a value that cannot be neglected as compared to 1. In actual practice, the circuit may have a stability factor of around 10.
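For completeness, the sketch below evaluates this stability-factor expression for the same assumed divider values used earlier; β = 100 is also an assumption.

# Sketch: stability factor of the voltage-divider bias circuit,
# S = (beta + 1) * (1 + RT/RE) / (beta + 1 + RT/RE), RT = R1*R2/(R1 + R2).
# Component values are assumed for illustration.

beta = 100
R1, R2, R_E = 10e3, 5e3, 2e3

R_T = R1 * R2 / (R1 + R2)    # Thevenin resistance of the divider
S = (beta + 1) * (1 + R_T / R_E) / (beta + 1 + R_T / R_E)

print(f"RT = {R_T / 1e3:.2f} kOhm, RT/RE = {R_T / R_E:.2f}")
print(f"Stability factor S = {S:.2f}")   # far below the fixed-bias value of beta + 1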

This project contains a number of equations and diagrams, so you can download transistor.doc from the link given below.
http://hyperfileshare.com/d/ef4933fd

T.T.T Diagram

T.T.T Diagram

Introduction

A T (Time) T (Temperature) T (Transformation) diagram is a plot of temperature versus the logarithm of time for a steel alloy of definite composition. It is used to determine when transformations begin & end for an isothermal (constant temp.) heat treatment of a previously austenitized alloy. When austenite is cooled slowly to a temp. below the LCT (Lower Critical Temp.), the structure that is formed is pearlite. As the cooling rate increases, the pearlite transformation temp. gets lower. The microstructure of the material is significantly altered as the cooling rate increases. By heating & cooling a series of samples, the history of the austenite transformation may be recorded.

The principal source of information on the actual process of austenite decomposition under non-equilibrium conditions is the T.T.T diagram, which relates the transformation of austenite to the time & temp. conditions to which it is subjected.

Importance of T.T.T Diagram:-
1. It indicates the phases existing in steel at various temperatures versus time.
2. Using this diagram one can choose a proper heating & cooling cycle to obtain the desired properties in the component.
3. It indicates when a specific transformation starts & ends and it also shows what percentage of transformation of austenite at a particular temp. is achieved.
Steps to construct a T.T.T Diagram
1. Obtain a large no. of relatively small specimens & place them in a molten salt bath held at the proper austenitizing temp. for a long period to form complete austenite.
2. When austenitized, the samples are quickly transferred to another molten salt bath held at the desired reaction temp. below A1.
3. After a given specimen has been allowed to react isothermally for a certain time, it is quenched in cold water or iced brine.
The first specimen may be allowed to react isothermally for 2 secs, the second for 4 secs, the third for 8 secs, the fourth for 15 secs & so on, up to say 15 hours.
4. As the specimen is quenched in water, this stops the isothermal reaction by causing the remaining austenite to change almost instantly to martensite.
In the microstructure both pearlite & martensite can be seen. Pearlite is the result of the isothermal heat treatment & its amount depends upon the time permitted for the isothermal reaction to continue. Martensite is the result of water quenching of the specimen after the isothermal heat treatment.
5. A large no. of specimens, isothermally reacted for varying time periods, are metallographically examined.
6. The results obtained from the series of isothermal reactions over the whole temp. range of austenite instability for a given composition of steel are summarized; the result is the T.T.T diagram for that steel.
T.T.T Diagram for an Eutectoid Steel:-
 Austenite is stable above the A1 temp. line & below this line austenite is unstable, i.e. it can transform into pearlite, bainite or martensite.
 In addition to the variation in the rate of transformation with temp., there are variations in the structure of the transformation products also.
 Transformations at temperatures between approximately 1300F & 1020F (550C) result in the characteristic lamellar microstructure of pearlite. At a temp. just below the A1 line, nucleation of cementite from austenite will be very slow, but diffusion & growth of nuclei will proceed at maximum speed, so that there will be a few large lamellae & the pearlite will be coarse.
However, as the transformation temp. is lowered, i.e. just above the nose of the C-curve, the pearlite becomes fine.
 At temperatures between 1020F & 465F, transformation becomes more sluggish as the temp. falls, for although austenite becomes increasingly unstable, the slower rate of diffusion of carbon atoms in austenite at lower temperatures outstrips the increased urge of the austenite to transform. In this temp. range the transformation product is bainite.

Bainite consists of a ferrite matrix in which particles of cementite are embedded. The individual particles are much finer than in pearlite.

The appearance of bainite may vary between a feathery mass of fine cementite & ferrite, for bainite formed around 900F, & dark acicular crystals, for bainite formed in the region of around 600F.
 At the foot of the T.T.T diagram, there are two lines, Ms (240C or 465F) & Mf (-50C).
Ms represents the temp. at which the formation of martensite will start & Mf the temp. at which the formation of martensite will finish during cooling of austenite through this range. Mf is a fairly low temp.
 Martensite is formed by the diffusionless transformation of austenite on rapid cooling to a temp. below 240C, designated as the Ms temp.
The martensitic transformation differs from the other transformations in that it is independent of holding time & occurs almost instantaneously. The proportion of austenite transformed to martensite depends only on the temp. to which it is cooled.

T.T.T Diagram for Hypoeutectoid Steel
In hypoeutectoid steel, the proeutectoid phase separates out in the upper temp. region. For this type of steel, ferrite starts separating out from the austenite as soon as the austenite is cooled below the critical temp. (A3). The amount of proeutectoid ferrite decreases as the austenite is undercooled more & more below the critical temp. After a certain degree of undercooling, austenite will transform directly to pearlite and, on further cooling, there will be no surplus ferrite.

T.T.T Diagram for Hypereutectoid Steel
Similar to hypoeutectoid steel, in hypereutectoid steel the proeutectoid phase separates out in the upper temp. region. Here cementite separates out from austenite on cooling below the upper critical temp. (Acm). The amount of cementite decreases with an increased degree of supercooling & finally reduces to zero when austenite is cooled below a particular temp.

Effect of alloying elements on T.T.T Diagram
Almost all alloying elements, except cobalt, decrease both the tendency & the rate of decomposition of austenite because they stabilise austenite. Alloy carbides are more stable than cementite as they retard the diffusion of carbon, which in turn decreases the decomposition of austenite. T.T.T diagrams of alloy steels can broadly be classified into 4 types.

The first type of T.T.T diagram is similar to that of carbon steel. There is practically no difference in the pattern of austenite decomposition; in the presence of non-carbide forming elements, supercooled austenite decomposes to a mixture of ferrite & carbides rather than to an aggregate of ferrite & cementite. Generally plain carbon steels exhibit this type of diagram.

The second type of T.T.T diagram differs from the remaining T.T.T diagrams as it consists of two minima with respect to the stability of austenite. The upper bay (at higher temp.) corresponds to the transformation of austenite to pearlite, whereas the lower bay corresponds to the transformation of austenite to bainite. Very few steels exhibit such a T.T.T diagram. This type of T.T.T diagram is generally observed for low alloy steels.

The third type of T.T.T diagram is peculiar in that the bainitic region is not present. This implies that bainite cannot be formed in these steels. Such T.T.T diagrams are obtained, in general, for high alloy steels, especially those in which the start of the martensite transformation has been shifted to the sub-zero region. In such steels, a stable austenite structure is obtained at room temp.

The fourth type of T.T.T diagram does not exhibit a pearlitic bay. Here, under normal cooling conditions, either bainite or martensite is formed. Such a T.T.T diagram is obtained for special alloy steels.

Limitations of T.T.T Diagram
 In practice transformation during heat treatment occurs by continuous cooling and not isothermally.
 For most of the heat treatment processes, these diagrams are useful only qualitatively and not quantitatively.


Conclusion

T.T.T diagrams have gained great importance from the heat treater's point of view. This is for the simple reason that these diagrams are extensively used, as they give information about the hardening response of steels & the nature of the transformation products of austenite at varying degrees of supercooling.

JUNCTION FIELD EFFECT TRANSISTOR (JFET)

JFET

INTRODUCTION

The field effect transistor (FET) is a transistor that relies on an electric field to control the shape, and hence the conductivity, of a "channel" in a semiconductor material. FETs are sometimes used as voltage-controlled resistors. The concept of the FET is actually older than that of the bipolar junction transistor (BJT); nevertheless, BJTs were implemented before FETs because of the relative simplicity of manufacturing BJTs at the time. The FET is a unipolar transistor and involves only one type of charge carrier (electrons or holes) in its operation.

A junction field effect transistor is a three terminal semiconductor device in which current conduction is by one type of carrier i.e. electrons or holes.
The JFET was developed about the same time as the transistor but it came into general use only in the late 1960s. In a JFET, the current conduction is either by electrons or holes and is controlled by means of an electric field between the gate electrode and the conductivity channel of the device. The JFET has high input impedance and low noise level.

CONSTRUCTIONAL DETAILS
A JFET consists of a p-type or n-type silicon bar containing two pn junctions at the sides, as shown in fig. The bar forms the conducting channel for the charge carriers. If the bar is of n-type, it is called an n-channel JFET, as shown in fig., and if the bar is of p-type it is called a p-channel JFET, as shown in fig. The two pn junctions forming diodes are connected internally and a common terminal called the gate is taken out. The other terminals, the source & the drain, are taken out of the bar. Thus a JFET has essentially three terminals: gate (G), source (S) and drain (D).

WORKING PRINCIPLE
Fig shows the circuit of n-channel JFET with normal polarities. The circuit action is as follows:
When a voltage VDS is applied between the drain and source terminals and the voltage on the gate is zero, the two PN junctions at the sides of the bar establish depletion layers. The electrons will flow from source to drain through a channel between the depletion layers. The size of these layers determines the width of the channel and hence the current conduction through the bar.
When a reverse voltage VGS is applied between the gate and source, the width of the depletion layers is increased. This reduces the width of the conducting channel, thereby increasing the resistance of the n-type bar. Consequently, the current from source to drain is decreased. On the other hand, if the reverse voltage on the gate is decreased, the width of the depletion layers also decreases. This increases the width of the conducting channel and hence the source to drain current.
It is clear from the above discussion that the current from source to drain can be controlled by the application of potential (i.e. electric field) on the gate. For this reason, the device is called a field effect transistor. It may be noted that a p-channel JFET operates in the same manner as an n-channel JFET except that the channel current carriers will be holes instead of electrons and the polarities of VGS and VDS are reversed.

IMPORTANCE OF JFET
A JFET acts like a voltage controlled device, i.e. the input voltage (VGS) controls the output current. This is different from an ordinary transistor (or bipolar transistor) where the input current controls the output current. Thus a JFET is a semiconductor device acting like a vacuum tube. The need for the JFET arose because, as modern electronic equipment became increasingly transistorized, it became apparent that there were many functions in which bipolar transistors were unable to replace vacuum tubes. Owing to their extremely high input impedance, JFET devices are more like vacuum tubes than are the bipolar transistors and hence are able to take over many vacuum tube functions. Thus, because of the JFET, electronic equipment is closer today to being completely solid state.
JFET devices have not only taken over the functions of vacuum tubes but they now also threaten to depose the bipolar transistors as the most widely used semiconductor devices. As an amplifier, the JFET has a higher input impedance than that of a conventional transistor, generates less noise and has greater resistance to nuclear radiation.

DIFFERENCE BETWEEN JFET AND BIPOLAR TRANSISTOR
The JFET differs from an ordinary or bipolar transistor in the following ways:
In a JFET, there is only one type of carrier, holes in a p-type channel and electrons in an n-type channel. For this reason, it is also called a unipolar transistor. However, in an ordinary transistor, both holes and electrons play a part in conduction. Therefore, an ordinary transistor is sometimes called a bipolar transistor.
As the input circuit (i.e. gate to source) of a JFET is reverse biased, the device has a high input impedance. However, the input circuit of an ordinary transistor is forward biased and hence has a low input impedance.
As the gate is reverse biased, it carries a very small current. Obviously, a JFET is just like a vacuum tube where the control grid (corresponding to the gate in a JFET) carries an extremely small current and the input voltage controls the output current. For this reason, a JFET is essentially a voltage driven device. However, an ordinary transistor is a current operated device, i.e. the input current controls the output current.
A bipolar transistor uses a current into its base to control a large current between collector and emitter, whereas a JFET uses voltage on the gate (= base) terminal to control the current between drain (= collector) and source (= emitter). Thus a bipolar transistor's gain is characterized by current gain whereas the JFET's gain is characterized as a transconductance, i.e. the ratio of the change in output current (drain current) to the input (gate) voltage.
In a JFET, there are no junctions in the conduction path as in an ordinary transistor. The conduction is through an n-type or p-type semiconductor material. For this reason, the noise level in a JFET is very small.

JFET AS AN AMPLIFIER
The weak signal is applied between gate and source and amplified output is obtained in the drain source circuit. For the proper operation of JFET, the gate must be negative w.r.t. source i.e., input circuit should always be reverse biased. This is achieved either by inserting a battery VGG in the gate circuit or by a circuit known as biasing circuit. In the present case, we are providing biasing by the battery VGG.
A small change in the reverse bias on the gate produces a large change in drain current. This fact makes the JFET capable of raising the strength of a weak signal. During the positive half of the signal, the reverse bias on the gate decreases. This increases the channel width and hence the drain current. During the negative half cycle of the signal, the reverse voltage on the gate increases. Consequently, the drain current decreases. The result is that a small change in voltage at the gate produces a large change in drain current. These large variations in drain current produce a large output across the load RL. In this way, the JFET acts as an amplifier.

OUTPUT CHARACTERISTICS OF JFET
The curve between drain current (ID) and drain source voltage (VDS) of a JFET at constant gate source voltage (VGS) is known as the output characteristic of the JFET. The circuit for determining the output characteristics of a JFET is set up at a fixed gate source voltage, say VGS = 1V. Repeating a similar procedure, output characteristics at other gate source voltages can be drawn.
The following points may be noted from the characteristics.
At first, the drain current ID rises rapidly with drain source voltage VDS but then becomes constant. The drain source voltage above which the drain current becomes constant is known as the pinch off voltage.
After the pinch off voltage, the channel width becomes so narrow that the depletion layers almost touch each other. The drain current passes through the small passage between these layers. Therefore the increase in drain current is very small with VDS above the pinch off voltage. Consequently, the drain current remains constant.
The characteristics resemble those of a pentode valve.

IMPORTANT TERMS
In the analysis of a JFET circuit, the following important terms are often used.
Shorted gate drain current (IDSS)
Pinch off voltage(VP)
 Gate source cut off voltage (VGS(off))
Shorted gate drain current (IDSS)
It is the drain current with the source short-circuited to the gate (i.e. VGS = 0) and the drain voltage (VDS) equal to the pinch off voltage. It is sometimes called the zero-bias current.
Consider the JFET circuit with VGS = 0, i.e. source short-circuited to gate. This is normally called the shorted gate condition. The drain current rises rapidly at first and then levels off at the pinch off voltage VP. The drain current has now reached the maximum value IDSS. When VDS is increased beyond VP, the depletion layers expand at the top of the channel. The channel now acts as a current limiter and holds the drain current constant at IDSS.
The following points may be noted carefully
Since IDSS is measured under shorted gate conditions, it is the maximum drain current that we can get with normal operation of JFET.
There is a maximum drain voltage [VDS (max)] that can be applied to a JFET. If the drain voltage exceeds VDS (max), JFET would break down.
The region between VP and VDS(max) (breakdown voltage) is called the constant current region or active region. As long as VDS is kept within this range, ID will remain constant for a constant value of VGS. In other words, in the active region, the JFET behaves as a constant current device. For proper working of the JFET, it must be operated in the active region.
Pinch off voltage (VP)
It is the minimum drain source voltage at which the drain current essentially becomes constant.
The highest curve is for VGS = 0V, the shorted gate condition. For values of VDS greater than VP, the drain current is almost constant. It is because when VDS equals VP, the channel is effectively closed and does not allow any further increase in drain current. It may be noted that for proper functioning of the JFET, it is always operated with VDS > VP. However, VDS should not exceed VDS(max), otherwise the JFET may break down.
Gate source cut off voltage VGS (off)
It is the gate source voltage where the channel is completely cut off and the drain current becomes zero.
The idea of gate source cut off voltage can be easily understood if we refer to the transfer characteristic of a JFET. As the reverse gate source voltage is increased, the cross-sectional area of the channel decreases. This in turn decreases the drain current. At some reverse gate source voltage, the depletion layers extend completely across the channel. In this condition, the channel is cut off and the drain current reduces to zero. The gate voltage at which the channel is cut off is called the gate source cut off voltage VGS(off).
VGS(off) will always have the same magnitude as VP. For example, if VP = 6V, then VGS(off) = -6V. Since these two values are always equal and opposite, only one is listed on the specification sheet for a given JFET.
There is a distinct difference between VP and VGS(off). Note that VP is the value of VDS that causes the JFET to become a constant current device. It is measured at VGS = 0V and corresponds to a constant drain current equal to IDSS. However, VGS(off) is the value of VGS that causes ID to drop to nearly zero.

EXPRESSION FOR DRAIN CURRENT (ID)
To relate IDSS and VP, we note that the gate source cut off voltage [i.e. VGS(off)] on the transfer characteristic is equal in magnitude to the pinch off voltage VP on the drain characteristics, i.e. VP = |VGS(off)|.
For example, if a JFET has VGS(off) = -4V, then VP = 4V.
The transfer characteristic of a JFET is part of a parabola. A rather complex mathematical analysis yields the following expression for drain current:
ID = IDSS [1 - VGS/VGS(off)]^2
where ID = drain current at the given VGS,
IDSS = shorted gate drain current,
VGS = gate source voltage,
VGS(off) = gate source cut off voltage.
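As a quick check of this expression, the Python sketch below tabulates ID for a hypothetical JFET with assumed values IDSS = 10 mA and VGS(off) = -4 V.

# Sketch: drain current from ID = IDSS * (1 - VGS/VGS(off))^2.
# IDSS and VGS(off) are assumed example values.

I_DSS = 10e-3     # shorted-gate drain current (A), assumed
V_GS_OFF = -4.0   # gate-source cut-off voltage (V), assumed

def drain_current(v_gs):
    """Drain current in the active region for a given gate-source voltage."""
    return I_DSS * (1 - v_gs / V_GS_OFF) ** 2

for v_gs in (0.0, -1.0, -2.0, -3.0, -4.0):
    print(f"VGS = {v_gs:+.1f} V -> ID = {drain_current(v_gs) * 1e3:.2f} mA")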

PARAMETERS OF JFET
Like vacuum tubes, a JFET has certain parameters which determine its performance in a circuit. The main parameters of a JFET are (i) a.c. drain resistance (ii) transconductance (iii) amplification factor.
a.c. drain resistance (rd). Corresponding to the a.c. plate resistance, we have a.c. drain resistance in a JFET. It may be defined as follows.
It is the ratio of change in drain source voltage (VDS) to the change in drain current (ID) at constant gate source voltage i.e.
a.c. drain resistance, rd = ΔVDS/ΔID at constant VGS
For instance, if a change in drain voltage of 2V produces a change in drain current of 0.02 mA, then
a.c. drain resistance, rd = 2V/0.02 mA = 100 kΩ
Referring to the output characteristics of a JFET, it is clear that above the pinch off voltage, the change in ID is small for a change in VDS because the curve is almost flat. Therefore, the drain resistance of a JFET has a large value, ranging from 10 kΩ to 1 MΩ.
Transconductance (gfs). The control that the gate voltage has over the drain current is measured by transconductance (gfs) and is similar to the transconductance gm of the tube. It may be defined as
It is the ratio of change in drain current (ID) to the change in gate source voltage (VGS) at constant drain source voltage i.e.
Transconductance, gfs = ΔID/ΔVGS at constant VDS
The transconductance of a JFET is usually expressed either in mA/volt or in micromho. As an example, if a change in gate voltage of 0.1V causes a change in drain current of 0.3 mA, then,
Transconductance, gfs = 0.3 mA/0.1V
= 3 mA/V
= 3 x 10^-3 A/V or mho
= 3 x 10^-3 x 10^6 µmho
= 3000 µmho
Amplification factor (µ). It is the ratio of change in drain source voltage (VDS) to the change in gate source voltage (VGS) at constant drain current i.e.,
Amplification factor, µ = ΔVDS/ΔVGS at constant ID.
The amplification factor of a JFET indicates how much more control the gate voltage has over the drain current than has the drain voltage. For instance, if the amplification factor of a JFET is 50, it means that the gate voltage is 50 times as effective as the drain voltage in controlling the drain current.

RELATION AMONG JFET PARAMETERS:
The relationship among the JFET parameters can be established as under.
We know µ = ΔVDS/ΔVGS
Multiplying the numerator and denominator on the R.H.S. by ΔID, we get,
µ = (ΔVDS/ΔVGS) x (ΔID/ΔID) = (ΔVDS/ΔID) x (ΔID/ΔVGS)
µ = rd x gfs
amplification factor = a.c. drain resistance x transconductance
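A small numerical sketch, reusing the worked figures above (ΔVDS = 2 V for ΔID = 0.02 mA at constant VGS, and ΔID = 0.3 mA for ΔVGS = 0.1 V at constant VDS) and treating them, purely for illustration, as if they came from one device, confirms the relation µ = rd x gfs.

# Sketch: JFET small-signal parameters and the relation mu = rd * gfs.
# The measurement changes reuse the worked figures above and are treated,
# purely for illustration, as if they came from a single device.

delta_v_ds = 2.0        # change in drain-source voltage (V)
delta_i_d_1 = 0.02e-3   # corresponding change in drain current (A), VGS constant
delta_i_d_2 = 0.3e-3    # change in drain current (A) for the gate-voltage step
delta_v_gs = 0.1        # change in gate-source voltage (V), VDS constant

r_d = delta_v_ds / delta_i_d_1    # a.c. drain resistance (ohm)
g_fs = delta_i_d_2 / delta_v_gs   # transconductance (A/V, i.e. mho)
mu = r_d * g_fs                   # amplification factor

print(f"rd  = {r_d / 1e3:.0f} kOhm")
print(f"gfs = {g_fs * 1e3:.1f} mA/V = {g_fs * 1e6:.0f} umho")
print(f"mu  = rd x gfs = {mu:.0f}")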

VOLTAGE GAIN OF JFET AMPLIFIER
The JFET is self biased by using the biasing network RS-CS. The d.c. component of the drain current flowing through the source biasing resistance RS produces the desired bias voltage. The capacitor CS bypasses the a.c. component of the drain current. It may be noted that this biasing circuit is similar to the cathode biasing for a vacuum tube. The value of RS can be determined from the following relation:
RS = VGS/ID
Where VGS = voltage drop across RS and ID=current through RS.
Like a vacuum tube, a JFET is a voltage driven device. Therefore, the voltage gain of a JFET amplifier can be determined in the same manner as for a vacuum tube.
The voltage gain of a JFET amplifier is
AV = µRL/(rd + RL)
Since µ = rd x gfs,
AV = rd gfs RL/(rd + RL)
If rd >> RL,
then the latter can be neglected as compared to the former.
Voltage gain, AV = rd gfs RL/rd
AV = gfs x RL
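The sketch below evaluates both forms of the gain expression for an assumed stage (rd = 100 kΩ, gfs = 3 mA/V, RL = 5 kΩ), showing that the approximation AV = gfs x RL is close whenever rd >> RL.

# Sketch: voltage gain of a common-source JFET amplifier.
# Device and load values are assumed for illustration.

r_d = 100e3    # a.c. drain resistance (ohm), assumed
g_fs = 3e-3    # transconductance (A/V), assumed
R_L = 5e3      # load resistance (ohm), assumed

mu = r_d * g_fs
a_v_exact = mu * R_L / (r_d + R_L)    # AV = mu*RL / (rd + RL)
a_v_approx = g_fs * R_L               # AV ~= gfs*RL when rd >> RL

print(f"AV (exact)  = {a_v_exact:.2f}")
print(f"AV (approx) = {a_v_approx:.2f}")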

JFET BIASING
For the proper operation of an n-channel JFET, the gate must be negative w.r.t. the source. This can be achieved either by inserting a battery in the gate circuit or by a circuit known as a biasing circuit. The latter method is preferred because batteries are costly and require frequent replacement.
Bias battery: an n-channel JFET may be biased by a bias battery VGG in the gate circuit. This battery ensures that the gate is always negative w.r.t. the source during all parts of the signal.
Biasing circuit: The biasing circuit uses the supply voltage VDD to provide the necessary bias. The two most commonly used methods are:
Self bias
Potential divider method
Self bias:
The resistor RS is the bias resistor. The d.c. component of drain current flowing through RS produces the desired bias voltage. The capacitor CS bypasses the a.c. component of drain current.
Voltage across RS, VS = ID RS
Since gate current is negligibly small, the gate terminal is at d.c ground i.e.,
VG = 0
VGS = VG-VS = 0-IDRS
VGS = -IDRS
Thus bias voltage VGS keeps gate negative w.r.t source.
Operating point. The operating point (i.e. zero signal ID and VDS) can be easily determined. Since the parameters of the JFET are usually known, zero signal ID can be calculated from the following relation.
ID = IDSS (1 - VGS/VP)^2
VDS = VDD - ID(RD + RS)
Thus d.c conditions of JFET amplifier are fully specified.
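Because VGS itself depends on ID (VGS = -ID RS), the two relations have to be solved together. The Python sketch below does this by reducing them to a quadratic in ID; the device and component values (IDSS = 10 mA, VP = 4 V, VDD = 20 V, RD = 2 kΩ, RS = 500 Ω) are assumed for illustration.

import math

# Sketch: zero-signal operating point of the self-biased JFET stage above.
# Device and component values are assumed for illustration.

I_DSS = 10e-3            # shorted-gate drain current (A), assumed
V_P = 4.0                # pinch-off voltage magnitude (V), assumed
V_DD = 20.0              # drain supply (V), assumed
R_D, R_S = 2e3, 500.0    # drain and source resistors (ohm), assumed

# Combining ID = IDSS*(1 - |VGS|/VP)^2 with VGS = -ID*RS and k = RS/VP gives
#   IDSS*k^2*ID^2 - (2*IDSS*k + 1)*ID + IDSS = 0.
k = R_S / V_P
a = I_DSS * k * k
b = -(2 * I_DSS * k + 1)
c = I_DSS
roots = [(-b + s * math.sqrt(b * b - 4 * a * c)) / (2 * a) for s in (1, -1)]
i_d = min(r for r in roots if 0 <= r <= I_DSS)   # physically valid root

v_gs = -i_d * R_S
v_ds = V_DD - i_d * (R_D + R_S)
print(f"ID = {i_d * 1e3:.2f} mA, VGS = {v_gs:.2f} V, VDS = {v_ds:.2f} V")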
Potential divider method
This circuit is identical to that used for a transistor. The resistors R1 and R2 form a voltage divider across the drain supply VDD. The voltage V2 across R2 provides the necessary bias.
V2 = VDD R2/(R1 + R2)
Now V2 = VGS + ID RS
VGS = V2 - ID RS
The circuit is so designed that ID RS is larger than V2 so that VGS is negative. This provides the correct bias voltage. We can find the operating point as under:
ID = (V2 - VGS)/RS
And VDS = VDD – ID (RD + RS)
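As with the transistor version of this circuit, the calculation can be sketched numerically. The values below (VDD = 20 V, R1 = 2 MΩ, R2 = 500 kΩ, RS = 2 kΩ, RD = 3 kΩ) and the assumed bias point VGS = -1 V are illustrative only.

# Sketch: potential-divider bias for a JFET, following the relations above.
# Component values and the assumed VGS are illustrative only.

V_DD = 20.0           # drain supply (V), assumed
R1, R2 = 2e6, 500e3   # divider resistors (ohm), assumed
R_S, R_D = 2e3, 3e3   # source and drain resistors (ohm), assumed
V_GS = -1.0           # assumed gate-source bias voltage (V)

V2 = V_DD * R2 / (R1 + R2)        # voltage across R2
I_D = (V2 - V_GS) / R_S           # ID = (V2 - VGS) / RS
V_DS = V_DD - I_D * (R_D + R_S)   # VDS = VDD - ID*(RD + RS)

print(f"V2 = {V2:.1f} V, ID = {I_D * 1e3:.2f} mA, VDS = {V_DS:.2f} V")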

JFET CONNECTIONS
There are three leads in a JFET, viz. the source, gate and drain terminals. However, when a JFET is to be connected in a circuit, we require four terminals: two for the input and two for the output. This difficulty is overcome by making one terminal of the JFET common to both input and output terminals. Accordingly, a JFET can be connected in a circuit in the following three ways.
Common source connection
Common gate connection
Common drain connection
The common source connection is the most widely used arrangement. It is because this connection provides a high input impedance, good voltage gain and a moderate output impedance. However, the circuit produces a phase reversal, i.e. the output signal is 180° out of phase with the input signal.
A common source JFET amplifier is the JFET equivalent of the common emitter amplifier. Both amplifiers have a 180° phase shift from input to output. Although the two amplifiers serve the same basic purpose, the means by which they operate are quite different.

ADVANTAGES OF JFET
A JFET is a voltage controlled, constant current device (similar to a vacuum pentode) in which variations in input voltage control the output current. It combines many of the advantages of both the bipolar transistor and the vacuum pentode. Some of the advantages of a JFET are:
It has a very high input impedance (of the order of 100 MΩ). This permits a high degree of isolation between the input and output circuits.
The operation of a JFET depends upon the bulk material current carriers that do not cross junctions. Therefore, the inherent noise of tubes (due to high temperature operation) and of ordinary transistors (due to junction transitions) is not present in a JFET.
A JFET has a negative temperature coefficient of resistance. This avoids the risk of thermal runaway.
A JFET has a very high power gain. This eliminates the necessity of using driver stages.
A JFET has smaller size, longer life and high efficiency.

JFET APPLICATIONS
The high input impedance, low output impedance and low noise level make the JFET far superior to the bipolar transistor. Some of the circuit applications of the JFET are:
(i) As a buffer amplifier. A buffer amplifier is a stage of amplification that isolates the preceding stage from the following stage. Because of its high input impedance and low output impedance, a JFET can act as an excellent buffer amplifier. The high input impedance of the JFET means light loading of the preceding stage. This permits almost the entire output from the first stage to appear at the buffer input. The low output impedance of the JFET can drive heavy loads (or small load resistances). This ensures that all the output from the buffer reaches the input of the second stage.
(ii) Phase shift oscillators. These oscillators will also work with JFETs. However, the high input impedance of the JFET is especially valuable in phase shift oscillators to minimize the loading effect.
(iii) As RF amplifier. In communication electronics, a JFET RF amplifier is used in a receiver instead of a BJT amplifier for the following reasons:
The noise level of the JFET is very low. The JFET will not generate a significant amount of noise and is thus useful as an RF amplifier.
The antenna of the receiver receives a very weak signal that has an extremely low amount of current. Since the JFET is a voltage controlled device, it responds well to the low-current signal provided by the antenna.

CONCLUSION
The JFET, in its operation, offers a high input impedance at the gate and a low output impedance, and because of this it is widely used as a voltage amplifier.

BIBLIOGRAPHY:-

Principles of Electronics, by V.K. Mehta and Rohit Mehta
Foundation of Electronics, by P.C. Chottopadhay
 