Power Spectrum Estimation Methods (Advanced Signal Processing Toolkit).

A power spectrum describes the energy distribution of a time series in the frequency domain. Energy is a real-valued quantity, so the power spectrum does not contain phase information. Because a time series can contain non-periodic or asynchronously sampled periodic signal components, the frequency spectrum of a time series is typically considered to be a continuous function of frequency. When you use a series of discrete frequency bins to represent the continuous frequency axis, the value at a specific frequency bin is proportional to the frequency interval. To remove the dependence on the size of the frequency interval, you can normalize the power spectrum to produce the power spectral density (PSD), which is the power spectrum divided by the size of the frequency interval. The PSD measures the signal power per unit bandwidth of a time series, in V^2/Hz, which implicitly assumes that the PSD represents a signal in volts driving a 1-ohm load. If you represent the PSD in decibels (dB), the corresponding unit of the PSD is dB ref V/sqrt(Hz). If you want to use other units for the estimated PSD of a time series, you must scale the time series to the appropriate engineering units (EU). After you scale a time series, the corresponding units for the linear PSD value and the dB PSD value are EU^2/Hz and dB ref EU^2/Hz, respectively. Use the TSA Scale to EU VI to scale a time series to the appropriate EU.

PSD estimation methods are classified as follows.

Parametric methods. These methods are based on parametric models of a time series, such as autoregressive (AR) models, moving-average (MA) models, and autoregressive moving-average (ARMA) models; parametric methods are therefore also known as model-based methods. To estimate the PSD of a time series with parametric methods, you first need to obtain the model parameters of the time series. You must build an appropriate model that correctly reflects the behavior of the system that generates the time series; otherwise, the estimated PSD may not be reliable. The multiple signal classification (MUSIC) method is also a model-based spectral estimation method.

Nonparametric methods. These methods, which include the periodogram method, the Welch method, and the Capon method, are based on the discrete Fourier transform. You do not need to obtain the parameters of the time series before using these methods. The primary limitation of nonparametric methods is that the computation applies a data window, which distorts the resulting PSDs because of window effects. An important advantage of nonparametric methods is their robustness: the estimated PSDs do not contain spurious frequency peaks. Parametric methods, in contrast, do not use data windows. Parametric methods assume a signal that fits a particular model, so the estimated PSDs can contain spurious frequency peaks if the assumed model is wrong. PSDs computed with parametric methods are less biased and have a lower variance than PSDs computed with nonparametric methods if the assumed model is correct; however, the magnitudes of PSD values estimated with parametric methods are usually inaccurate.

Note: In spectral analysis, you can average successive spectral measurements to reduce the estimation variance and improve measurement accuracy. Use the TSA Average PSD VI to average successively estimated spectra.

Power spectral density given ARMA values. This function computes power spectral density values given the parameters of an ARMA model. It is assumed that the driving sequence is a white-noise process with zero mean and variance rho. The sampling frequency and noise variance are used to scale the PSD output, the length of which is set by the user with the NFFT parameter. A: array of AR parameters (complex or real). B: array of MA parameters (complex or real). rho: white-noise variance used to scale the returned PSD. T: sample interval in seconds used to scale the returned PSD. NFFT: final size of the PSD. sides: by default the PSD is two-sided, but sides can be set to centerdc. By convention, the AR and MA arrays do not contain the leading A0 = 1 coefficient. If B is None, the model is a pure AR model; if A is None, the model is a pure MA model.
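As a rough illustration of the computation just described, the sketch below evaluates a two-sided ARMA PSD directly with NumPy. This is a minimal sketch, not the toolkit's implementation: the function name arma_psd and the example coefficients are hypothetical, and the scaling convention PSD(f) = rho * T * |B(f)|^2 / |A(f)|^2 is an assumption.

```python
import numpy as np

def arma_psd(A=None, B=None, rho=1.0, T=1.0, NFFT=4096):
    """Sketch of a two-sided PSD for an ARMA model (hypothetical helper, not a library call)."""
    # Rebuild the AR and MA polynomials, including the leading 1 that the
    # parameter arrays omit by convention.
    ar = np.concatenate(([1.0], A)) if A is not None else np.array([1.0])
    ma = np.concatenate(([1.0], B)) if B is not None else np.array([1.0])
    # Evaluate both polynomials on NFFT points around the unit circle.
    denom = np.fft.fft(ar, NFFT)
    numer = np.fft.fft(ma, NFFT)
    # Assumed convention: PSD(f) = rho * T * |B(f)|^2 / |A(f)|^2.
    return rho * T * np.abs(numer) ** 2 / np.abs(denom) ** 2

# Hypothetical AR(2) model driven by unit-variance white noise, sampled at 100 Hz.
psd = arma_psd(A=[-0.9, 0.5], rho=1.0, T=1.0 / 100.0, NFFT=1024)
print(psd[:5])
```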
Structural Equation Modeling. A Conceptual Overview.

Structural equation modeling is a very general, very powerful multivariate analysis technique that includes specialized versions of a number of other analysis methods as special cases. We will assume that you are familiar with the basic logic of statistical reasoning as described in Elementary Concepts. We will also assume that you are familiar with the concepts of variance, covariance, and correlation; if not, we advise that you read the Basic Statistics section at this point. Although it is not absolutely necessary, it is highly desirable that you have some background in factor analysis before attempting to use structural modeling.

Major applications of structural equation modeling include:
- causal modeling, or path analysis, which hypothesizes causal relationships among variables and tests the causal models with a linear equation system; causal models can involve either manifest variables, latent variables, or both;
- confirmatory factor analysis, an extension of factor analysis in which specific hypotheses about the structure of the factor loadings and intercorrelations are tested;
- second order factor analysis, a variation of factor analysis in which the correlation matrix of the common factors is itself factor analyzed to provide second order factors;
- regression models, an extension of linear regression analysis in which regression weights may be constrained to be equal to each other, or to specified numerical values;
- covariance structure models, which hypothesize that a covariance matrix has a particular form (for example, you can test the hypothesis that a set of variables all have equal variances with this procedure);
- correlation structure models, which hypothesize that a correlation matrix has a particular form; a classic example is the hypothesis that the correlation matrix has the structure of a circumplex (Guttman, 1954; Wiggins, Steiger, & Gaelick, 1981).

Many different kinds of models fall into each of the above categories, so structural modeling as an enterprise is very difficult to characterize. Most structural equation models can be expressed as path diagrams. Consequently, even beginners to structural modeling can perform complicated analyses with a minimum of training.

The Basic Idea Behind Structural Modeling. One of the fundamental ideas taught in intermediate applied statistics courses is the effect of additive and multiplicative transformations on a list of numbers. Students are taught that if you multiply every number in a list by some constant K, you multiply the mean of the numbers by K. Similarly, you multiply the standard deviation by the absolute value of K.
For example, suppose you have the list of numbers 1, 2, 3. These numbers have a mean of 2 and a standard deviation of 1. Now, suppose you were to take these 3 numbers and multiply them by 4. Then the mean would become 8, the standard deviation would become 4, and hence the variance 16.

The point is that if you have a set of numbers X related to another set of numbers Y by the equation Y = 4X, then the variance of Y must be 16 times that of X, so you can test the hypothesis that Y and X are related by the equation Y = 4X indirectly by comparing the variances of the Y and X variables.

This idea generalizes, in various ways, to several variables related by a group of linear equations. The rules become more complex, the calculations more difficult, but the basic message remains the same: you can test whether variables are interrelated through a set of linear relationships by examining their variances and covariances.

Statisticians have developed procedures for testing whether a set of variances and covariances in a covariance matrix fits a specified structure. The way structural modeling works is as follows:
1. You state the way you believe the variables are interrelated, often with the use of a path diagram.
2. You work out, via some complex internal rules, what the implications of this are for the variances and covariances of the variables.
3. You test whether the variances and covariances fit this model of them.
4. The results of the statistical testing, and also the parameter estimates and standard errors for the numerical coefficients in the linear equations, are reported.
5. On the basis of this information, you decide whether the model seems like a good fit to your data.

There are some important and very basic logical points to remember about this process. First, although the mathematical machinery required to perform structural equation modeling is extremely complicated, the basic logic is embodied in the above 5 steps. Below, we diagram the process. Second, we must remember that it is unreasonable to expect a structural model to fit perfectly, for a number of reasons. A structural model with linear relations is only an approximation; the world is unlikely to be linear, and the true relations between variables are probably nonlinear. Moreover, many of the statistical assumptions are somewhat questionable as well. The real question is not so much "Does the model fit perfectly?" but rather "Does it fit well enough to be a useful approximation to reality, and a reasonable explanation of the trends in our data?" Third, we must remember that simply because a model fits the data well does not mean that the model is necessarily correct. One cannot prove that a model is true; to claim so is the fallacy of affirming the consequent. For example, we can say "If Joe is a cat, Joe has hair." However, "Joe has hair" does not imply that Joe is a cat. Similarly, we can say that if a certain causal model is true, it will fit the data; however, the model fitting the data does not necessarily imply that the model is the correct one. There may be another model that fits the data equally well.

Structural Equation Modeling and the Path Diagram. Path diagrams play a fundamental role in structural modeling. Path diagrams are like flowcharts: they show variables interconnected with lines that are used to indicate causal flow. One can think of a path diagram as a device for showing which variables cause changes in other variables.
Path diagrams need not be thought of strictly in this way, however. They can also be given a narrower, more specific interpretation. Consider the classic linear regression equation. Any such equation can be represented in a path diagram as follows. Such diagrams establish a simple isomorphism. All variables in the equation system are placed in the diagram, either in boxes or in ovals. Each equation is represented in the diagram as follows: all independent variables (the variables on the right side of an equation) have arrows pointing to the dependent variable, and the weighting coefficient is placed above the arrow. The diagram above shows a simple linear equation system and its path diagram representation.

Notice that, besides representing the linear equation relationships with arrows, the diagrams also contain some additional aspects. First, the variances of the independent variables, which we must know in order to test the structural relations model, are shown on the diagrams using curved lines without arrowheads attached; we refer to such lines as wires. Second, some variables are represented in ovals, others in rectangular boxes: manifest variables are placed in boxes in the path diagram, while latent variables are placed in an oval or circle. For example, the variable E in the diagram above can be thought of as a linear regression residual when Y is predicted from X. Such a residual is not observed directly but calculated from Y and X, so we treat it as a latent variable and place it in an oval.

The example described above is an extremely simple one. Generally, we are interested in testing models that are much more complicated than these. As the equation systems we examine become increasingly complicated, so do the covariance structures they imply. Ultimately, the complexity can become so bewildering that we lose sight of some very basic principles. For one thing, the chain of reasoning that supports testing causal models with linear structural equations has several weak links. Variables may be nonlinear. They may be linearly related for reasons unrelated to what we commonly view as causality. The old adage "correlation is not causation" remains true, even if the correlation is complex and multivariate. What causal modeling does allow us to do is examine the extent to which data fail to agree with one reasonably viable consequence of a model of causality. If the system of linear equations isomorphic to the path diagram fits the data well, that is encouraging, but hardly proof of the truth of the causal model.

Although path diagrams can be used to represent causal flow in a system of variables, they need not imply such a causal flow. Such diagrams may be viewed as simply an isomorphic representation of a linear equation system; as such, they can convey linear relationships when no causal relations are assumed. Hence, although one might interpret the diagram in the figure above to mean that X causes Y, the diagram can also be interpreted as a visual representation of the linear regression relationship between X and Y.

Survival/Failure Time Analysis. General Information.

These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social and economic sciences, as well as in engineering (reliability and failure time analysis). Imagine that you are a researcher in a hospital who is studying the effectiveness of a new treatment for a generally terminal disease.
The major variable of interest is the number of days that the respective patients survive. In principle, one could use the standard parametric and nonparametric statistics for describing the average survival and for comparing the new treatment with traditional methods (see Basic Statistics and Nonparametrics and Distribution Fitting). However, at the end of the study there will be patients who survived over the entire study period, in particular among those patients who entered the hospital (and the research project) late in the study; there will be other patients with whom we will have lost contact. Surely, one would not want to exclude all of those patients from the study by declaring them to be missing data, since most of them are survivors and therefore they reflect on the success of the new treatment method. Those observations, which contain only partial information, are called censored observations (e.g., "patient A survived at least 4 months before he moved away and we lost contact"); the term censoring was first used by Hald (1949).

Censored Observations. In general, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time. Censored observations may occur in a number of different areas of research. For example, in the social sciences we may study the survival of marriages, high school drop-out rates (time to drop-out), turnover in organizations, and so on. In each case, by the end of the study period some subjects will still be married, will not have dropped out, or will still be working at the same company; thus, those subjects represent censored observations. In economics, we may study the survival of new businesses or the survival times of products such as automobiles. In quality control research, it is common practice to study the survival of parts under stress (failure time analysis).

Analytic Techniques. The methods offered in Survival Analysis address the same research questions as many of the other procedures; however, all methods in Survival Analysis handle censored data. The life table, survival distribution, and Kaplan-Meier survival function estimation are all descriptive methods for estimating the distribution of survival times from a sample. Several techniques are available for comparing the survival in two or more groups. Finally, Survival Analysis offers several regression models for estimating the relationship of multiple continuous variables to survival times.

Life Table Analysis. The most straightforward way to describe the survival in a sample is to compute the life table. The life table technique is one of the oldest methods for analyzing survival (failure time) data (e.g., see Berkson & Gage, 1950; Cutler & Ederer, 1958; Gehan, 1969). This table can be thought of as an enhanced frequency distribution table. The distribution of survival times is divided into a certain number of intervals. For each interval we can then compute the number and proportion of cases or objects that entered the respective interval alive, the number and proportion of cases that failed in the respective interval (i.e., the number of terminal events, or number of cases that died), and the number of cases that were lost or censored in the respective interval.

Based on those numbers and proportions, several additional statistics can be computed.

Number of cases at risk. This is the number of cases that entered the respective interval alive, minus half of the number of cases lost or censored in the respective interval.
Proportion failing. This proportion is computed as the ratio of the number of cases failing in the respective interval, divided by the number of cases at risk in the interval.

Proportion surviving. This proportion is computed as 1 minus the proportion failing.

Cumulative proportion surviving (survival function). This is the cumulative proportion of cases surviving up to the respective interval. Since the probabilities of survival are assumed to be independent across the intervals, this probability is computed by multiplying out the probabilities of survival across all previous intervals. The resulting function is also called the survivorship or survival function.

Probability density. This is the estimated probability of failure in the respective interval, computed per unit of time, that is,

F_i = (P_i - P_{i+1}) / h_i

In this formula, F_i is the respective probability density in the i-th interval, P_i is the estimated cumulative proportion surviving at the beginning of the i-th interval (at the end of interval i-1), P_{i+1} is the cumulative proportion surviving at the end of the i-th interval, and h_i is the width of the respective interval.

Hazard rate. The hazard rate (the term was first used by Barlow, 1963) is defined as the probability per time unit that a case that has survived to the beginning of the respective interval will fail in that interval. Specifically, it is computed as the number of failures per time unit in the respective interval, divided by the average number of surviving cases at the mid-point of the interval.

Median survival time. This is the survival time at which the cumulative survival function is equal to 0.5. Other percentiles (the 25th and 75th percentiles) of the cumulative survival function can be computed accordingly. Note that the 50th percentile (median) of the cumulative survival function is usually not the same as the point in time up to which 50% of the sample survived; this would only be the case if there were no censored observations prior to that time.

Required sample sizes. In order to arrive at reliable estimates of the three major functions (survival, probability density, and hazard) and their standard errors at each time interval, the minimum recommended sample size is 30.
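To make the life-table quantities above concrete, here is a minimal sketch that follows the definitions directly; all interval counts are hypothetical, and the code is not tied to any particular package.

```python
import numpy as np

# Hypothetical life table: interval widths, cases entering alive, failures, censored.
h = np.array([10.0, 10.0, 10.0, 10.0])      # interval widths (e.g., days)
entered = np.array([100, 70, 45, 25])        # cases entering each interval alive
failed = np.array([25, 20, 15, 10])          # terminal events in each interval
censored = np.array([5, 5, 5, 5])            # cases lost or censored in each interval

at_risk = entered - censored / 2.0           # number of cases at risk
q = failed / at_risk                         # proportion failing
p = 1.0 - q                                  # proportion surviving
P = np.concatenate(([1.0], np.cumprod(p)))   # cumulative proportion surviving (P_1 = 1)
density = (P[:-1] - P[1:]) / h               # F_i = (P_i - P_{i+1}) / h_i
hazard = (failed / h) / (at_risk - failed / 2.0)  # failures per time unit / avg. survivors at midpoint

print(P[1:], density, hazard)
```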
Distribution Fitting. General Introduction. In summary, the life table gives us a good indication of the distribution of failures over time. However, for predictive purposes it is often desirable to understand the shape of the underlying survival function in the population. The major distributions that have been proposed for modeling survival or failure times are the exponential (and linear exponential) distribution, the Weibull distribution of extreme events, and the Gompertz distribution.

Estimation. The parameter estimation procedure (for estimating the parameters of the theoretical survival functions) is essentially a least squares linear regression algorithm (see Gehan & Siddiqui, 1973). A linear regression algorithm can be used because all four theoretical distributions can be made linear by appropriate transformations. Such transformations sometimes produce different variances for the residuals at different times, leading to biased estimates.

Goodness-of-Fit. Given the parameters for the different distribution functions and the respective model, we can compute the likelihood of the data. One can also compute the likelihood of the data under the null model, that is, a model that allows for different hazard rates in each interval. Without going into details, these two likelihoods can be compared via an incremental Chi-square test statistic. If this Chi-square is statistically significant, then we conclude that the respective theoretical distribution fits the data significantly worse than the null model, and we reject the respective distribution as a model for our data.

Plots. You can produce plots of the survival function, hazard, and probability density for the observed data and the respective theoretical distributions. These plots provide a quick visual check of the goodness of fit of the theoretical distribution. The example below shows an observed survivorship function and the fitted Weibull distribution. Specifically, the three lines in this plot denote the theoretical distributions that resulted from three different estimation procedures (least squares and two methods of weighted least squares).

Kaplan-Meier Product-Limit Estimator. Rather than classifying the observed survival times into a life table, we can estimate the survival function directly from the continuous survival or failure times. Intuitively, imagine that we create a life table so that each time interval contains exactly one case. Multiplying out the survival probabilities across the intervals (that is, for each single observation), we would get for the survival function

S(t) = product over all cases j with survival time <= t of [(n - j) / (n - j + 1)]^delta(j)

In this equation, S(t) is the estimated survival function, n is the total number of cases, and the product (geometric sum) is taken across all cases less than or equal to t; delta(j) is a constant that is equal to 1 if the j-th case is uncensored (complete) and 0 if it is censored. This estimate of the survival function is also called the product-limit estimator and was first proposed by Kaplan and Meier (1958). An example of this function is shown below.

The advantage of the Kaplan-Meier Product-Limit method over the life table method for analyzing survival and failure time data is that the resulting estimates do not depend on the grouping of the data into a certain number of time intervals. In fact, the Product-Limit method and the life table method are identical if the intervals of the life table contain at most one observation.
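A small sketch of the product-limit computation described above, using hypothetical survival times and censoring indicators (1 = failure observed, 0 = censored). It implements the formula directly rather than calling a survival-analysis library, and handles tied times only approximately.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of the survival function (hypothetical helper)."""
    order = np.argsort(times)
    t = np.asarray(times)[order]
    d = np.asarray(events)[order]              # 1 = uncensored, 0 = censored
    n = len(t)
    at_risk = n - np.arange(n)                 # cases still at risk just before each time
    # Each uncensored case contributes a factor (n - j) / (n - j + 1);
    # censored cases contribute a factor of 1 (exponent delta = 0).
    factors = (at_risk - d) / at_risk
    return t, np.cumprod(factors)

# Hypothetical follow-up times in months.
t, S = kaplan_meier(times=[2, 3, 3, 5, 8, 9, 12], events=[1, 1, 0, 1, 0, 1, 0])
for ti, si in zip(t, S):
    print(f"t = {ti}: S(t) = {si:.3f}")
```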
Comparing Samples. General Introduction. One can compare the survival or failure times in two or more samples. In principle, because survival times are not normally distributed, nonparametric tests that are based on the rank ordering of survival times should be applied. A wide range of nonparametric tests can be used in order to compare survival times; however, such tests cannot handle censored observations.

Available tests. The following five different (mostly nonparametric) tests for censored data are available: Gehan's generalized Wilcoxon test, the Cox-Mantel test, Cox's F test, the log-rank test, and Peto and Peto's generalized Wilcoxon test. A nonparametric test for the comparison of multiple groups is also available. Most of these tests are accompanied by appropriate z-values (values of the standard normal distribution); these z-values can be used to test for the statistical significance of any differences between groups. However, note that most of these tests will only yield reliable results with fairly large sample sizes; the small-sample behavior is less well understood.

Choosing a two-sample test. There are no widely accepted guidelines concerning which test to use in a particular situation. Cox's F test tends to be more powerful than Gehan's generalized Wilcoxon test when (1) sample sizes are small (i.e., n per group less than 50), (2) the samples are from an exponential or Weibull distribution, and (3) there are no censored observations (see Gehan & Thomas, 1969). Lee, Desu, and Gehan (1975) compared Gehan's test to several alternatives and showed that the Cox-Mantel test and the log-rank test are more powerful (regardless of censoring) when the samples are drawn from a population that follows an exponential or Weibull distribution; under these conditions there is little difference between the Cox-Mantel test and the log-rank test. Lee (1980) discusses the power of different tests in greater detail.

Multiple sample test. There is a multiple-sample test that is an extension (or generalization) of Gehan's generalized Wilcoxon test, Peto and Peto's generalized Wilcoxon test, and the log-rank test. First, a score is assigned to each survival time using Mantel's procedure (Mantel, 1967); next, a Chi-square value is computed based on the sums (for each group) of this score. If only two groups are specified, then this test is equivalent to Gehan's generalized Wilcoxon test, and the computations default to that test in this case.

Unequal proportions of censored data. When comparing two or more groups, it is very important to examine the number of censored observations in each group. Particularly in medical research, censoring can be the result of, for example, the application of different treatments: patients who get better faster, or who get worse as a result of a treatment, may be more likely to drop out of the study, resulting in different numbers of censored observations in each group. Such systematic censoring may greatly bias the results of comparisons.

Regression Models. General Introduction. A common research question in medical, biological, or engineering (failure time) research is to determine whether or not certain continuous (independent) variables are correlated with the survival or failure times. There are two major reasons why this research issue cannot be addressed via straightforward multiple regression techniques (as available in Multiple Regression). First, the dependent variable of interest (survival/failure time) is most likely not normally distributed, a serious violation of an assumption for ordinary least squares multiple regression; survival times usually follow an exponential or Weibull distribution. Second, there is the problem of censoring, that is, some observations will be incomplete.

Cox's Proportional Hazard Model. The proportional hazard model is the most general of the regression models because it is not based on any assumptions concerning the nature or shape of the underlying survival distribution. The model assumes that the underlying hazard rate (rather than survival time) is a function of the independent variables (covariates); no assumptions are made about the nature or shape of the hazard function. Thus, in a sense, Cox's regression model may be considered to be a nonparametric method. The model may be written as

h(t, z_1, ..., z_m) = h_0(t) * exp(b_1*z_1 + b_2*z_2 + ... + b_m*z_m)

where h(t, ...) denotes the resultant hazard, given the values of the m covariates for the respective case (z_1, z_2, ..., z_m) and the respective survival time (t). The term h_0(t) is called the baseline hazard; it is the hazard for the respective individual when all independent variable values are equal to zero. We can linearize this model by dividing both sides of the equation by h_0(t) and then taking the natural logarithm of both sides:

log[h(t, z_1, ..., z_m) / h_0(t)] = b_1*z_1 + b_2*z_2 + ... + b_m*z_m

We now have a fairly simple linear model that can readily be estimated.
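The linearized form is convenient for interpretation: the baseline hazard cancels whenever two cases are compared. The sketch below uses hypothetical (not fitted) coefficient values to show how the relative hazard exp(b'z) and a hazard ratio are computed.

```python
import numpy as np

# Hypothetical coefficients b for m = 2 covariates (e.g., age and treatment indicator).
b = np.array([0.03, -0.70])

def relative_hazard(z):
    """h(t, z) / h0(t) = exp(b'z): the hazard relative to baseline, constant over time."""
    return np.exp(b @ np.asarray(z, dtype=float))

case_a = [60, 1]   # age 60, treated
case_b = [50, 0]   # age 50, untreated

# The ratio of hazards for two cases does not involve h0(t) at all;
# this is the proportionality assumption in action.
ratio = relative_hazard(case_a) / relative_hazard(case_b)
print(f"hazard ratio A vs. B = {ratio:.3f}")
```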
Assumptions. While no assumptions are made about the shape of the underlying hazard function, the model equations shown above do imply two assumptions. First, they specify a multiplicative relationship between the underlying hazard function and the log-linear function of the covariates. This assumption is also called the proportionality assumption. In practical terms, it is assumed that, given two observations with different values for the independent variables, the ratio of the hazard functions for those two observations does not depend on time. The second assumption, of course, is that there is a log-linear relationship between the independent variables and the underlying hazard function.

Cox's Proportional Hazard Model with Time-Dependent Covariates. An assumption of the proportional hazard model is that the hazard function for an individual (i.e., an observation in the analysis) depends on the values of the covariates and the value of the baseline hazard. Given two individuals with particular values for the covariates, the ratio of the estimated hazards over time will be constant; hence the name of the method, the proportional hazard model. The validity of this assumption may often be questionable. For example, age is often included in studies of physical health. Suppose you studied survival after surgery. It is likely that age is a more important predictor of risk immediately after surgery than some time after the surgery (after initial recovery). In accelerated life testing, one sometimes uses a stress covariate (e.g., the amount of voltage) that is slowly increased over time until failure occurs (e.g., until the electrical insulation fails; see Lawless, 1982, page 393). In this case, the impact of the covariate is clearly dependent on time. The user can specify arithmetic expressions to define covariates as functions of several variables and of survival time.

Testing the proportionality assumption. As indicated by the previous examples, there are many applications where it is likely that the proportionality assumption does not hold. In that case, one can explicitly define covariates as functions of time. For example, consider the analysis of a data set presented by Pike (1966), which consists of survival times for two groups of rats that had been exposed to a carcinogen (see also Lawless, 1982, page 393, for a similar example). Suppose that z is a grouping variable with codes 1 and 0 to denote whether or not the respective rat was exposed. One could then fit the proportional hazard model

h(t, z) = h_0(t) * exp{b_1*z + b_2*[z*(log(t) - 5.4)]}

In this model, the conditional hazard at time t is a function of (1) the baseline hazard h_0, (2) the covariate z, and (3) z times the logarithm of time. Note that the constant 5.4 is used here for scaling purposes only: the mean of the logarithm of the survival times in this data set is equal to 5.4. In other words, the conditional hazard at each point in time is a function of the covariate and of time; thus, the effect of the covariate on survival is dependent on time, hence the name time-dependent covariate. This model allows one to specifically test the proportionality assumption. If parameter b_2 is statistically significant (e.g., if it is at least twice as large as its standard error), then one can conclude that, indeed, the effect of the covariate z on survival is dependent on time, and, therefore, that the proportionality assumption does not hold.

Exponential Regression. Basically, this model assumes that the survival time distribution is exponential and contingent on the values of a set of independent variables (z_i). The rate parameter of the exponential distribution can then be expressed as

S(z) = exp(a + b_1*z_1 + b_2*z_2 + ... + b_m*z_m)
In this equation, S(z) denotes the survival times, a is a constant, and the b_i's are the regression parameters.

Goodness-of-fit. The Chi-square goodness-of-fit value is computed as a function of the log-likelihood for the model with all parameter estimates (L1) and the log-likelihood of the model in which all covariates are forced to 0 (zero; L0). If this Chi-square value is significant, we reject the null hypothesis and assume that the independent variables are significantly related to survival times.

Standard exponential order statistic. One way to check the exponentiality assumption of this model is to plot the residual survival times against the standard exponential order statistic (theta). If the exponentiality assumption is met, then all points in this plot will be arranged roughly in a straight line.

Normal and Log-Normal Regression. In this model, it is assumed that the survival times (or log survival times) come from a normal distribution; the resulting model is basically identical to the ordinary multiple regression model, and may be stated as

t = a + b_1*z_1 + b_2*z_2 + ... + b_m*z_m

where t denotes the survival times. For log-normal regression, t is replaced by its natural logarithm. The normal regression model is particularly useful because many data sets can be transformed to yield approximations of the normal distribution. Thus, in a sense this is the most general fully parametric model (as opposed to Cox's proportional hazard model, which is nonparametric), and estimates can be obtained for a variety of different underlying survival distributions.

Goodness-of-fit. The Chi-square value is computed as a function of the log-likelihood for the model with all independent variables (L1) and the log-likelihood of the model in which all independent variables are forced to 0 (zero; L0).

Stratified Analyses. The purpose of a stratified analysis is to test the hypothesis of whether identical regression models are appropriate for different groups, that is, whether the relationships between the independent variables and survival are identical in different groups. To perform a stratified analysis, one must first fit the respective regression model separately within each group. The sum of the log-likelihoods from these analyses represents the log-likelihood of the model with different regression coefficients (and intercepts, where appropriate) in different groups. The next step is to fit the requested regression model to all data in the usual manner (i.e., ignoring group membership) and to compute the log-likelihood for the overall fit. The difference between the log-likelihoods can then be tested for statistical significance (via the Chi-square statistic).
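The log-likelihood comparison just described is a likelihood-ratio test. The following sketch assembles the Chi-square statistic from hypothetical log-likelihood values; the degrees of freedom shown assume that each within-group fit estimates the same number of parameters, which should be adapted to the actual model being compared.

```python
from scipy import stats

# Hypothetical log-likelihoods from fitting the same regression model
# separately within each of three groups, and once to the pooled data.
loglik_groups = [-210.4, -198.7, -225.1]   # separate fits (one per group)
loglik_pooled = -642.9                     # single fit ignoring group membership
params_per_fit = 4                         # assumed parameters estimated in each fit

# Chi-square = 2 * (sum of separate log-likelihoods - pooled log-likelihood),
# with df equal to the number of extra parameters in the separate-fits model.
chi2 = 2.0 * (sum(loglik_groups) - loglik_pooled)
df = params_per_fit * (len(loglik_groups) - 1)
p_value = stats.chi2.sf(chi2, df)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p_value:.4f}")
```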
Text Mining (Big Data, Unstructured Data). Text Mining Introductory Overview.

The purpose of Text Mining is to process unstructured (textual) information, extract meaningful numeric indices from the text, and, thus, make the information contained in the text accessible to the various data mining (statistical and machine learning) algorithms. Information can be extracted to derive summaries for the words contained in the documents or to compute summaries for the documents based on the words contained in them. Hence, you can analyze words, clusters of words used in documents, etc., or you could analyze documents and determine similarities between them or how they are related to other variables of interest in the data mining project. In the most general terms, text mining will turn text into numbers (meaningful indices), which can then be incorporated in other analyses such as predictive data mining projects, the application of unsupervised learning methods (clustering), etc. These methods are described and discussed in great detail in the comprehensive overview work by Manning and Schütze (2002); for an in-depth treatment of these and related topics, as well as the history of this approach to text mining, we highly recommend that source.

Typical Applications for Text Mining. Unstructured text is very common, and in fact may represent the majority of information available to a particular research or data mining project.

Analyzing open-ended survey responses. In survey research (e.g., marketing), it is not uncommon to include various open-ended questions pertaining to the topic under investigation. The idea is to permit respondents to express their views or opinions without constraining them to particular dimensions or a particular response format. This may yield insights into customers' views and opinions that might otherwise not be discovered when relying solely on structured questionnaires designed by experts. For example, you may discover a certain set of words or terms that are commonly used by respondents to describe the pros and cons of a product or service under investigation, suggesting common misconceptions or confusion regarding the items in the study.

Automatic processing of messages, emails, etc. Another common application for text mining is to aid in the automatic classification of texts. For example, it is possible to filter out automatically most undesirable junk email based on certain terms or words that are not likely to appear in legitimate messages but instead identify undesirable electronic mail; in this manner, such messages can automatically be discarded. Such automatic systems for classifying electronic messages can also be useful in applications where messages need to be routed (automatically) to the most appropriate department or agency (e.g., email messages with complaints or petitions to a municipal authority are automatically routed to the appropriate departments; at the same time, the emails are screened for inappropriate or obscene messages, which are automatically returned to the sender with a request to remove the offending words or content).
Analyzing warranty or insurance claims, diagnostic interviews, etc. In some business domains, the majority of information is collected in open-ended, textual form. For example, warranty claims or initial medical (patient) interviews can be summarized in brief narratives, or when you take your automobile to a service station for repairs, typically, the attendant will write some notes about the problems that you report and what you believe needs to be fixed. Increasingly, those notes are collected electronically, so those types of narratives are readily available for input into text mining algorithms. This information can then be usefully exploited to, for example, identify common clusters of problems and complaints on certain automobiles, etc. Likewise, in the medical field, open-ended descriptions by patients of their own symptoms might yield useful clues for the actual medical diagnosis.

Investigating competitors by crawling their web sites. Another type of potentially very useful application is to automatically process the contents of Web pages in a particular domain. For example, you could go to a Web page and begin crawling the links you find there to process all Web pages that are referenced. In this manner, you could automatically derive a list of terms and documents available at that site, and hence quickly determine the most important terms and features that are described. It is easy to see how these capabilities could efficiently deliver valuable business intelligence about the activities of competitors.

Approaches to Text Mining. To reiterate, text mining can be summarized as a process of numericizing text. At the simplest level, all words found in the input documents will be indexed and counted in order to compute a table of documents and words, i.e., a matrix of frequencies that enumerates the number of times that each word occurs in each document. This basic process can be further refined to exclude certain common words such as "the" and "a" (stop word lists) and to combine different grammatical forms of the same words such as traveling, traveled, travel, etc. (stemming). However, once a table of unique words (terms) by documents has been derived, all standard statistical and data mining techniques can be applied to derive dimensions or clusters of words or documents, or to identify important words or terms that best predict another outcome variable of interest.

Using well-tested methods and understanding the results of text mining. Once a data matrix has been computed from the input documents and the words found in those documents, various well-known analytic techniques can be used for further processing those data, including methods for clustering, factoring, or predictive data mining (see, for example, Manning and Schütze, 2002).
Black-box approaches to text mining and extraction of concepts. There are text mining applications which offer black-box methods to extract deep meaning from documents with little human effort (to first read and understand those documents). These text mining applications rely on proprietary algorithms for presumably extracting concepts from text, and may even claim to be able to summarize large numbers of text documents automatically, retaining the core and most important meaning of those documents. While there are numerous algorithmic approaches to extracting meaning from documents, this type of technology is very much still in its infancy, and the aspiration to provide meaningful automated summaries of large numbers of documents may forever remain elusive. We urge skepticism when using such algorithms because (1) if it is not clear to the user how those algorithms work, it cannot possibly be clear how to interpret the results of those algorithms, and (2) the methods used in those programs are not open to scrutiny, for example by the academic community and peer review, and, hence, we simply don't know how well they might perform in different domains. As a final thought on this subject, consider this concrete example: try the various automated translation services available via the Web that can translate entire paragraphs of text from one language into another. Then translate some text, even simple text, from your native language to some other language and back, and review the results. Almost every time, the attempt to translate even short sentences to other languages and back, while retaining the original meaning of the sentence, produces humorous rather than accurate results. This illustrates the difficulty of automatically interpreting the meaning of text.

Text mining as document search. There is another type of application that is often described and referred to as text mining: the automatic search of large numbers of documents based on key words or key phrases. This is the domain of, for example, the popular internet search engines that have been developed over the last decade to provide efficient access to Web pages with certain content. While this is obviously an important type of application with many uses in any organization that needs to search very large document repositories based on varying criteria, it is very different from what has been described here.

Issues and Considerations for Numericizing Text.

Large numbers of small documents vs. small numbers of large documents. Examples of scenarios using large numbers of small or moderate-sized documents were given earlier (e.g., analyzing warranty or insurance claims, diagnostic interviews, etc.). On the other hand, if your intent is to extract concepts from only a few documents that are very large (e.g., two lengthy books), then statistical analyses are generally less powerful because the number of cases (documents) in this case is very small while the number of variables (extracted words) is very large.

Excluding certain characters, short words, numbers, etc. Excluding numbers, certain characters, or sequences of characters, or words that are shorter or longer than a certain number of letters can be done before the indexing of the input documents starts. You may also want to exclude rare words, defined as those that only occur in a small percentage of the processed documents.
Include lists, exclude lists (stop-words). A specific list of words to be indexed can be defined; this is useful when you want to search explicitly for particular words and classify the input documents based on the frequencies with which those words occur. Also, stop-words, i.e., terms that are to be excluded from the indexing, can be defined. Typically, a default list of English stop words includes "the", "a", "of", "since", etc., i.e., words that are used in the respective language very frequently but communicate very little unique information about the contents of the document.

Synonyms and phrases. Synonyms, such as "sick" or "ill", or words that are used in particular phrases where they denote unique meaning, can be combined for indexing. For example, "Microsoft Windows" might be such a phrase, which is a specific reference to the computer operating system but has nothing to do with the common use of the term "Windows" as it might, for example, be used in descriptions of home improvement projects.

Stemming algorithms. An important pre-processing step before indexing of input documents begins is the stemming of words. The term stemming refers to the reduction of words to their roots so that, for example, different grammatical forms or declinations of verbs are identified and indexed (counted) as the same word. For example, stemming will ensure that both traveling and traveled will be recognized by the text mining program as the same word.

Support for different languages. Stemming, synonyms, the letters that are permitted in words, etc. are highly language-dependent operations. Therefore, support for different languages is important.

Transforming Word Frequencies. Once the input documents have been indexed and the initial word frequencies (by document) computed, a number of additional transformations can be performed to summarize and aggregate the information that was extracted.

Log-frequencies. First, various transformations of the frequency counts can be performed. The raw word or term frequencies generally reflect how salient or important a word is in each document. Specifically, words that occur with greater frequency in a document are better descriptors of the contents of that document. However, it is not reasonable to assume that the word counts themselves are proportional to their importance as descriptors of the documents. For example, if a word occurs 1 time in document A but 3 times in document B, then it is not necessarily reasonable to conclude that this word is 3 times as important a descriptor of document B as compared to document A. Thus, a common transformation of the raw word frequency counts (wf) is to compute

f(wf) = 1 + log(wf), for wf > 0

This transformation will dampen the raw frequencies and how they affect the results of subsequent computations.

Binary frequencies. Likewise, an even simpler transformation can be used that enumerates whether a term is used in a document, i.e.,

f(wf) = 1, for wf > 0

The resulting documents-by-words matrix will contain only 1s and 0s to indicate the presence or absence of the respective words. Again, this transformation will dampen the effect of the raw frequency counts on subsequent computations and analyses.
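A small sketch of the two dampening transformations above, applied to a hypothetical documents-by-words count matrix (rows are documents, columns are words; the counts are invented for illustration):

```python
import numpy as np

# Hypothetical raw word frequencies: rows = documents, columns = words.
wf = np.array([[0, 1, 3],
               [2, 0, 7],
               [1, 1, 0]])

# Log-frequency dampening: f(wf) = 1 + log(wf) for wf > 0 (zero counts stay 0).
log_freq = np.zeros(wf.shape)
mask = wf > 0
log_freq[mask] = 1.0 + np.log(wf[mask])

# Binary indicator: f(wf) = 1 for wf > 0.
binary = (wf > 0).astype(float)

print(log_freq)
print(binary)
```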
Inverse document frequencies. Another issue that you may want to consider more carefully, and reflect in the indices used in further analyses, is the relative document frequencies (df) of different words. For example, a term such as guess may occur frequently in all documents, while another term, such as software, may only occur in a few. The reason is that we might make guesses in various contexts, regardless of the specific topic, while software is a more semantically focused term that is only likely to occur in documents that deal with computer software. A common and very useful transformation that reflects both the specificity of words (document frequencies) as well as the overall frequencies of their occurrences (word frequencies) is the so-called inverse document frequency for the i-th word and j-th document:

idf(i, j) = 0 for wf_ij = 0, and idf(i, j) = (1 + log(wf_ij)) * log(N / df_i) for wf_ij >= 1

In this formula (see also formula 15.5 in Manning and Schütze, 2002), N is the total number of documents, and df_i is the document frequency for the i-th word (the number of documents that include this word). Hence, it can be seen that this formula includes both the dampening of the simple word frequencies via the log function (described above) and a weighting factor that evaluates to 0 if the word occurs in all documents (log(N/N) = 0) and to the maximum value when a word only occurs in a single document (log(N/1) = log(N)). It can easily be seen how this transformation creates indices that reflect both the relative frequencies of occurrences of words and their semantic specificities over the documents included in the analysis.
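The following sketch applies the inverse-document-frequency transformation as reconstructed above to a hypothetical documents-by-words matrix; note that a word occurring in every document receives a weight of zero.

```python
import numpy as np

# Hypothetical documents-by-words frequency matrix (rows = documents).
wf = np.array([[0, 1, 3],
               [2, 1, 7],
               [1, 1, 0]], dtype=float)

N = wf.shape[0]                       # total number of documents
df = (wf > 0).sum(axis=0)             # document frequency of each word

# Dampened frequency 1 + log(wf) where the word occurs, 0 elsewhere.
dampened = np.zeros_like(wf)
mask = wf > 0
dampened[mask] = 1.0 + np.log(wf[mask])

# idf(i, j) = dampened frequency * log(N / df_i); the middle word occurs in
# every document, so its column evaluates to 0.
idf = dampened * np.log(N / df)
print(idf)
```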
Latent Semantic Indexing via Singular Value Decomposition. As described above, the most basic result of the initial indexing of words found in the input documents is a frequency table with simple counts, i.e., the number of times that different words occur in each input document. Usually, we would transform those raw counts to indices that better reflect the (relative) importance of words and/or their semantic specificity in the context of the set of input documents (see the discussion of inverse document frequencies, above). A common analytic tool for interpreting the meaning or semantic space described by the words that were extracted, and hence by the documents that were analyzed, is to create a mapping of the words and documents into a common space, computed from the word frequencies or transformed word frequencies (e.g., inverse document frequencies). In general, here is how it works.

Suppose you indexed a collection of customer reviews of their new automobiles (e.g., for different makes and models). You may find that every time a review includes the word gas-mileage, it also includes the term economy. Further, when reports include the word reliability, they also include the term defects (e.g., make reference to "no defects"). However, there is no consistent pattern regarding the use of the terms economy and reliability, i.e., some documents include either one or both. In other words, these four words, gas-mileage and economy, and reliability and defects, describe two independent dimensions: the first having to do with the overall operating cost of the vehicle, the other with the quality and workmanship. The idea of latent semantic indexing is to identify such underlying dimensions (of meaning), into which the words and documents can be mapped. As a result, we may identify the underlying (latent) themes described or discussed in the input documents, and also identify the documents that mostly deal with economy, reliability, or both. Hence, we want to map the extracted words or terms and the input documents into a common latent semantic space.

Singular value decomposition. The use of singular value decomposition in order to extract a common space for the variables and cases (observations) appears in various statistical techniques, most notably in Correspondence Analysis. The technique is also closely related to Principal Components Analysis and Factor Analysis. In general, the purpose of this technique is to reduce the overall dimensionality of the input matrix (number of input documents by number of extracted words) to a lower-dimensional space, where each consecutive dimension represents the largest degree of variability (between words and documents) possible. Ideally, you might identify the two or three most salient dimensions, accounting for most of the variability (differences) between the words and documents, and hence identify the latent semantic space that organizes the words and documents in the analysis. In some way, once such dimensions can be identified, you have extracted the underlying meaning of what is contained (discussed, described) in the documents.
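Here is a minimal sketch of the mapping into a common space via singular value decomposition; the matrix below stands in for a (transformed) documents-by-words matrix with invented values, and only the two leading dimensions are retained.

```python
import numpy as np

# Hypothetical documents-by-words matrix (e.g., inverse document frequencies).
X = np.array([[2.1, 0.0, 1.3, 0.0],
              [1.8, 0.2, 1.1, 0.0],
              [0.0, 1.9, 0.0, 1.4],
              [0.1, 2.2, 0.0, 1.6]])

# Thin SVD: X = U * diag(s) * Vt.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2                                   # keep the two most salient dimensions
doc_coords = U[:, :k] * s[:k]           # documents mapped into the latent space
word_coords = Vt[:k].T * s[:k]          # words mapped into the same space

print(doc_coords)
print(word_coords)
```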
Incorporating Text Mining Results in Data Mining Projects. After significant (e.g., frequent) words have been extracted from a set of input documents, and/or after singular value decomposition has been applied to extract salient semantic dimensions, typically the next and most important step is to use the extracted information in a data mining project.

Graphics (visual data mining methods). Depending on the purpose of the analyses, in some instances the extraction of semantic dimensions alone can be a useful outcome if it clarifies the underlying structure of what is contained in the input documents. For example, a study of new car owners' comments about their vehicles may uncover the salient dimensions in the minds of those drivers when they think about or consider their automobile (or how they feel about it). For marketing research purposes, that in itself can be a useful and significant result. You can use graphics (e.g., 2D scatterplots or 3D scatterplots) to help you visualize and identify the semantic space extracted from the input documents.

Clustering and factoring. You can use cluster analysis methods to identify groups of documents (e.g., vehicle owners who described their new cars), that is, to identify groups of similar input texts. This type of analysis also could be extremely useful in the context of market research studies, for example of new car owners. You can also use Factor Analysis and Principal Components and Classification Analysis to factor analyze words or documents.

Predictive data mining. Another possibility is to use the raw or transformed word counts as predictor variables in predictive data mining projects.

Time Series Analysis. How To Identify Patterns in Time Series Data (Time Series Analysis).

In the following topics, we will first review techniques used to identify patterns in time series data (such as smoothing and curve fitting techniques and autocorrelations), then we will introduce a general class of models that can be used to represent time series data and generate predictions (autoregressive and moving average models). Finally, we will review some simple but commonly used modeling and forecasting techniques based on linear regression. For more information, see the topics below.

General Introduction. In the following topics, we will review techniques that are useful for analyzing time series data, that is, sequences of measurements that follow non-random orders. Unlike the analyses of random samples of observations that are discussed in the context of most other statistics, the analysis of time series is based on the assumption that successive values in the data file represent consecutive measurements taken at equally spaced time intervals.

Detailed discussions of the methods described in this section can be found in Anderson (1976), Box and Jenkins (1976), Kendall (1984), Kendall and Ord (1990), Montgomery, Johnson, and Gardiner (1990), Pankratz (1983), Shumway (1988), Vandaele (1983), Walker (1991), and Wei (1989).

Two Main Goals. There are two main goals of time series analysis: (a) identifying the nature of the phenomenon represented by the sequence of observations, and (b) forecasting (predicting future values of the time series variable). Both of these goals require that the pattern of observed time series data is identified and more or less formally described. Once the pattern is established, we can interpret and integrate it with other data (i.e., use it in our theory of the investigated phenomenon, e.g., seasonal commodity prices). Regardless of the depth of our understanding and the validity of our interpretation (theory) of the phenomenon, we can extrapolate the identified pattern to predict future events.

Identifying Patterns in Time Series Data. For more information on simple autocorrelations (introduced in this section) and other autocorrelations, see Anderson (1976), Box and Jenkins (1976), Kendall (1984), Pankratz (1983), and Vandaele (1983).

Systematic Pattern and Random Noise. As in most other analyses, in time series analysis it is assumed that the data consist of a systematic pattern (usually a set of identifiable components) and random noise (error), which usually makes the pattern difficult to identify. Most time series analysis techniques involve some form of filtering out noise in order to make the pattern more salient.

Two General Aspects of Time Series Patterns.
Most time series patterns can be described in terms of two basic classes of components: trend and seasonality. The former represents a general systematic linear or (most often) nonlinear component that changes over time and does not repeat, or at least does not repeat within the time range captured by our data (e.g., a plateau followed by a period of exponential growth). The latter may have a formally similar nature (e.g., a plateau followed by a period of exponential growth); however, it repeats itself in systematic intervals over time. Those two general classes of time series components may coexist in real-life data. For example, sales of a company can rapidly grow over years, but they can still follow consistent seasonal patterns (e.g., as much as 25% of yearly sales each year are made in December, whereas only 4% in August).

This general pattern is well illustrated in a classic Series G data set (Box and Jenkins, 1976, p. 531) representing monthly international airline passenger totals (measured in thousands) in twelve consecutive years from 1949 to 1960 (see example data file and graph above). If you plot the successive observations (months) of airline passenger totals, a clear, almost linear trend emerges, indicating that the airline industry enjoyed steady growth over the years (approximately 4 times more passengers traveled in 1960 than in 1949). At the same time, the monthly figures will follow an almost identical pattern each year (e.g., more people travel during holidays than during any other time of the year). This example data file also illustrates a very common general type of pattern in time series data, where the amplitude of the seasonal changes increases with the overall trend (i.e., the variance is correlated with the mean over the segments of the series). This pattern, which is called multiplicative seasonality, indicates that the relative amplitude of seasonal changes is constant over time; thus it is related to the trend.

Trend Analysis. There are no proven "automatic" techniques to identify trend components in time series data; however, as long as the trend is monotonous (consistently increasing or decreasing), that part of data analysis is typically not very difficult. If the time series data contain considerable error, then the first step in the process of trend identification is smoothing.

Smoothing. Smoothing always involves some form of local averaging of data such that the nonsystematic components of individual observations cancel each other out. The most common technique is moving average smoothing, which replaces each element of the series by either the simple or weighted average of n surrounding elements, where n is the width of the smoothing "window" (see Box & Jenkins, 1976; Velleman & Hoaglin, 1981). Medians can be used instead of means. The main advantage of median, as compared to moving average, smoothing is that its results are less biased by outliers (within the smoothing window). Thus, if there are outliers in the data (e.g., due to measurement errors), median smoothing typically produces smoother, or at least more "reliable", curves than a moving average based on the same window width. The main disadvantage of median smoothing is that in the absence of clear outliers it may produce more "jagged" curves than a moving average, and it does not allow for weighting.
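The two smoothers just described can be sketched in a few lines; the series below is hypothetical and contains one outlier so that the difference between the moving average and the running median is visible.

```python
import numpy as np

def moving_average(x, n):
    """Simple moving average with window width n (series shortened at the edges)."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="valid")

def moving_median(x, n):
    """Running median with window width n; less sensitive to outliers in the window."""
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[i:i + n]) for i in range(len(x) - n + 1)])

# Hypothetical noisy series with one outlier (e.g., a measurement error).
series = np.array([3.0, 3.2, 3.1, 9.8, 3.3, 3.4, 3.6, 3.5, 3.7, 3.9])
print(moving_average(series, 3))
print(moving_median(series, 3))
```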
In the relatively less common cases (in time series data) when the measurement error is very large, the distance weighted least squares smoothing or negative exponentially weighted smoothing techniques can be used. All those methods will filter out the noise and convert the data into a smooth curve that is relatively unbiased by outliers (see the respective sections on each of those methods for more details). Series with relatively few and systematically distributed points can be smoothed with bicubic splines.

Fitting a function. Many monotonous time series data can be adequately approximated by a linear function; if there is a clear monotonous nonlinear component, the data first need to be transformed to remove the nonlinearity. Usually a logarithmic, exponential, or (less often) polynomial function can be used.

Analysis of Seasonality.
Seasonal dependency (seasonality) is another general component of the time series pattern. The concept was illustrated in the example of the airline passengers data above. It is formally defined as correlational dependency of order k between each i'th element of the series and the (i-k)'th element (Kendall, 1976) and measured by autocorrelation (i.e., a correlation between the two terms); k is usually called the lag. If the measurement error is not too large, seasonality can be visually identified in the series as a pattern that repeats every k elements.

Autocorrelation correlogram. Seasonal patterns of time series can be examined via correlograms. The correlogram (autocorrelogram) displays graphically and numerically the autocorrelation function (ACF), that is, serial correlation coefficients (and their standard errors) for consecutive lags in a specified range of lags (e.g., 1 through 30). Ranges of two standard errors for each lag are usually marked in correlograms, but typically the size of the autocorrelation is of more interest than its reliability (see Elementary Concepts), because we are usually interested only in very strong (and thus highly significant) autocorrelations.

Examining correlograms. While examining correlograms, you should keep in mind that autocorrelations for consecutive lags are formally dependent. Consider the following example: if the first element is closely related to the second, and the second to the third, then the first element must also be somewhat related to the third one, etc. This implies that the pattern of serial dependencies can change considerably after removing the first order autocorrelation (i.e., after differencing the series with a lag of 1).

Partial autocorrelations. Another useful method to examine serial dependencies is to examine the partial autocorrelation function (PACF), an extension of autocorrelation where the dependence on the intermediate elements (those within the lag) is removed. In other words, the partial autocorrelation is similar to autocorrelation, except that when calculating it, the autocorrelations with all the elements within the lag are partialled out (Box & Jenkins, 1976; see also McDowall, McCleary, Meidinger, & Hay, 1980). If a lag of 1 is specified (i.e., there are no intermediate elements within the lag), then the partial autocorrelation is equivalent to the autocorrelation. In a sense, the partial autocorrelation provides a cleaner picture of serial dependencies for individual lags (not confounded by other serial dependencies).
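A brief sketch of how such a correlogram can be computed numerically, here with statsmodels (an assumed tool; any routine that returns serial correlation coefficients would do). The series with a lag-12 dependency is synthetic and purely illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
# Illustrative series with a seasonal dependency of lag 12 plus noise.
t = np.arange(120)
y = 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2.0, size=t.size)

# Autocorrelation function (ACF) for lags 0..30: the basis of the correlogram.
acf_vals = acf(y, nlags=30)

# Partial autocorrelation function (PACF): autocorrelations with the
# intermediate lags partialled out.
pacf_vals = pacf(y, nlags=30)

for lag in (1, 6, 12, 24):
    print(f"lag {lag:2d}: ACF = {acf_vals[lag]:+.2f}, PACF = {pacf_vals[lag]:+.2f}")
```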
Removing serial dependency. Serial dependency for a particular lag of k can be removed by differencing the series, that is, converting each i'th element of the series into its difference from the (i-k)'th element. There are two major reasons for such transformations.

First, we can identify the hidden nature of seasonal dependencies in the series. Remember that, as mentioned in the previous paragraph, autocorrelations for consecutive lags are interdependent. Therefore, removing some of the autocorrelations will change other autocorrelations, that is, it may eliminate them or it may make some other seasonalities more apparent.

The other reason for removing seasonal dependencies is to make the series stationary, which is necessary for ARIMA and other techniques.

For more information on Time Series methods, see also the following topics.

General Introduction.
The modeling and forecasting procedures discussed in Identifying Patterns in Time Series Data involved knowledge about the mathematical model of the process. However, in real-life research and practice, patterns of the data are unclear, individual observations involve considerable error, and we still need not only to uncover the hidden patterns in the data but also to generate forecasts. The ARIMA methodology developed by Box and Jenkins (1976) allows us to do just that; it has gained enormous popularity in many areas, and research practice confirms its power and flexibility (Hoff, 1983; Pankratz, 1983; Vandaele, 1983). However, because of its power and flexibility, ARIMA is a complex technique; it is not easy to use, it requires a great deal of experience, and although it often produces satisfactory results, those results depend on the researcher's level of expertise (Bails & Peppers, 1982). The following sections will introduce the basic ideas of this methodology. For those interested in a brief, applications-oriented (non-mathematical) introduction to ARIMA methods, we recommend McDowall, McCleary, Meidinger, and Hay (1980).

Two Common Processes.
Autoregressive process. Most time series consist of elements that are serially dependent in the sense that you can estimate a coefficient or a set of coefficients that describe consecutive elements of the series from specific, time-lagged (previous) elements. This can be summarized in the equation:

x_t = ξ + φ_1·x_(t-1) + φ_2·x_(t-2) + φ_3·x_(t-3) + ... + ε_t

where ξ is a constant (intercept), and φ_1, φ_2, φ_3 are the autoregressive model parameters.

Put into words, each observation is made up of a random error component (random shock, ε) and a linear combination of prior observations.

Stationarity requirement. Note that an autoregressive process will only be stable if the parameters are within a certain range; for example, if there is only one autoregressive parameter, then it must fall within the interval of -1 < φ < +1. Otherwise, past effects would accumulate and the values of successive x_t's would move towards infinity, that is, the series would not be stationary. If there is more than one autoregressive parameter, similar (general) restrictions on the parameter values can be defined (e.g., see Box & Jenkins, 1976; Montgomery, 1990).

Moving average process. Independent from the autoregressive process, each element in the series can also be affected by the past error (or random shock) that cannot be accounted for by the autoregressive component, that is:

x_t = μ + ε_t - θ_1·ε_(t-1) - θ_2·ε_(t-2) - θ_3·ε_(t-3) - ...

where μ is a constant, and θ_1, θ_2, θ_3 are the moving average model parameters.

Put into words, each observation is made up of a random error component (random shock) and a linear combination of prior random shocks.
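As a small illustration of these two equations, the following Python sketch simulates an AR(1) process (with |φ| < 1, so it is stationary) and an MA(1) process directly from their defining recursions; the parameter values are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
eps = rng.normal(size=n)          # random shocks (white noise)

phi, theta, xi, mu = 0.7, 0.4, 0.0, 0.0

# AR(1): x_t = xi + phi * x_(t-1) + eps_t  (stationary because |phi| < 1)
x_ar = np.zeros(n)
for t in range(1, n):
    x_ar[t] = xi + phi * x_ar[t - 1] + eps[t]

# MA(1): x_t = mu + eps_t - theta * eps_(t-1)
x_ma = np.zeros(n)
for t in range(1, n):
    x_ma[t] = mu + eps[t] - theta * eps[t - 1]

print("AR(1) sample variance:", round(x_ar.var(), 2))
print("MA(1) sample variance:", round(x_ma.var(), 2))
```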
Invertibility requirement. Without going into too much detail, there is a duality between the moving average process and the autoregressive process (e.g., see Box & Jenkins, 1976; Montgomery, Johnson, & Gardiner, 1990); that is, the moving average equation above can be rewritten (inverted) into an autoregressive form of infinite order. However, analogous to the stationarity condition described above, this can only be done if the moving average parameters follow certain conditions, that is, if the model is invertible. Otherwise, the series will not be stationary.

ARIMA Methodology.
Autoregressive moving average model. The general model introduced by Box and Jenkins (1976) includes autoregressive as well as moving average parameters, and explicitly includes differencing in the formulation of the model. Specifically, the three types of parameters in the model are: the autoregressive parameters (p), the number of differencing passes (d), and the moving average parameters (q). In the notation introduced by Box and Jenkins, models are summarized as ARIMA (p, d, q); so, for example, a model described as (0, 1, 2) means that it contains 0 (zero) autoregressive (p) parameters and 2 moving average (q) parameters which were computed for the series after it was differenced once.

Identification. As mentioned earlier, the input series for ARIMA needs to be stationary, that is, it should have a constant mean, variance, and autocorrelation through time. Therefore, usually the series first needs to be differenced until it is stationary (this also often requires log transforming the data to stabilize the variance). The number of times the series needs to be differenced to achieve stationarity is reflected in the d parameter (see the previous paragraph). In order to determine the necessary level of differencing, you should examine the plot of the data and the autocorrelogram. Significant changes in level (strong upward or downward changes) usually require first order non-seasonal (lag=1) differencing; strong changes of slope usually require second order non-seasonal differencing. Seasonal patterns require respective seasonal differencing (see below). If the estimated autocorrelation coefficients decline slowly at longer lags, first order differencing is usually needed. However, you should keep in mind that some time series may require little or no differencing, and that over-differenced series produce less stable coefficient estimates.

At this stage (which is usually called the Identification phase, see below) we also need to decide how many autoregressive (p) and moving average (q) parameters are necessary to yield an effective but still parsimonious model of the process (parsimonious means that it has the fewest parameters and greatest number of degrees of freedom among all models that fit the data). In practice, the numbers of the p or q parameters very rarely need to be greater than 2 (see below for more specific recommendations).
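A sketch of this identification step in Python (statsmodels and pandas are assumed tools, and the series is synthetic): log-transform to stabilize the variance, difference once, and check whether the autocorrelations still decline slowly at longer lags.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import acf

# Illustrative monthly series with an (exponential) trend; in practice this would
# be your own data, e.g. the airline passenger totals discussed above.
rng = np.random.default_rng(2)
passengers = pd.Series(np.exp(0.01 * np.arange(144)) * 100 + rng.normal(0, 5, 144))

log_y = np.log(passengers)          # stabilize variance (multiplicative seasonality)
d1 = log_y.diff().dropna()          # first order, non-seasonal (lag 1) differencing

# A slowly declining ACF at longer lags suggests more differencing is needed;
# a quickly vanishing ACF suggests the series is already stationary.
print("ACF of log series, lags 1-6: ", np.round(acf(log_y, nlags=6)[1:], 2))
print("ACF after differencing once:", np.round(acf(d1, nlags=6)[1:], 2))
```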
Estimation and Forecasting. At the next step (Estimation), the parameters are estimated (using function minimization procedures, see below; for more information on minimization procedures see also Nonlinear Estimation), so that the sum of squared residuals is minimized. The estimates of the parameters are used in the last stage (Forecasting) to calculate new values of the series (beyond those included in the input data set) and confidence intervals for those predicted values. The estimation process is performed on the transformed (differenced) data; before the forecasts are generated, the series needs to be integrated (integration is the inverse of differencing) so that the forecasts are expressed in values compatible with the input data. This automatic integration feature is represented by the letter I in the name of the methodology (ARIMA = Auto-Regressive Integrated Moving Average).

The constant in ARIMA models. In addition to the standard autoregressive and moving average parameters, ARIMA models may also include a constant, as described above. The interpretation of a (statistically significant) constant depends on the model that is fit. Specifically, (1) if there are no autoregressive parameters in the model, then the expected value of the constant is μ, the mean of the series; (2) if there are autoregressive parameters in the series, then the constant represents the intercept. If the series is differenced, then the constant represents the mean or intercept of the differenced series; for example, if the series is differenced once, and there are no autoregressive parameters in the model, then the constant represents the mean of the differenced series, and therefore the linear trend slope of the un-differenced series.

Identification Phase.
Number of parameters to be estimated. Before the estimation can begin, we need to decide on (identify) the specific number and type of ARIMA parameters to be estimated. The major tools used in the identification phase are plots of the series, correlograms of autocorrelation (ACF), and partial autocorrelation (PACF). The decision is not straightforward, and in less typical cases requires not only experience but also a good deal of experimentation with alternative models (as well as the technical parameters of ARIMA). However, a majority of empirical time series patterns can be sufficiently approximated using one of the 5 basic models that can be identified based on the shape of the autocorrelogram (ACF) and partial autocorrelogram (PACF). The following brief summary is based on practical recommendations of Pankratz (1983); for additional practical advice, see also Hoff (1983), McCleary and Hay (1980), McDowall, McCleary, Meidinger, and Hay (1980), and Vandaele (1983). Also, note that since the number of parameters (of each kind) to be estimated is almost never greater than 2, it is often practical to try alternative models on the same data.

One autoregressive (p) parameter: ACF - exponential decay; PACF - spike at lag 1, no correlation for other lags.

Two autoregressive (p) parameters: ACF - a sine-wave shape pattern or a set of exponential decays; PACF - spikes at lags 1 and 2, no correlation for other lags.

One moving average (q) parameter: ACF - spike at lag 1, no correlation for other lags; PACF - damps out exponentially.

Two moving average (q) parameters: ACF - spikes at lags 1 and 2, no correlation for other lags; PACF - a sine-wave shape pattern or a set of exponential decays.
One autoregressive (p) and one moving average (q) parameter: ACF - exponential decay starting at lag 1; PACF - exponential decay starting at lag 1.

Seasonal models. Multiplicative seasonal ARIMA is a generalization and extension of the method introduced in the previous paragraphs to series in which a pattern repeats seasonally over time. In addition to the non-seasonal parameters, seasonal parameters for a specified lag (established in the identification phase) need to be estimated. Analogous to the simple ARIMA parameters, these are: seasonal autoregressive (ps), seasonal differencing (ds), and seasonal moving average (qs) parameters. For example, the model (0,1,2)(0,1,1) describes a model that includes no autoregressive parameters, 2 regular moving average parameters, and 1 seasonal moving average parameter, and these parameters were computed for the series after it was differenced once with lag 1, and once seasonally differenced. The seasonal lag used for the seasonal parameters is usually determined during the identification phase and must be explicitly specified.

The general recommendations concerning the selection of parameters to be estimated (based on ACF and PACF) also apply to seasonal models. The main difference is that in seasonal series, ACF and PACF will show sizable coefficients at multiples of the seasonal lag (in addition to their overall patterns reflecting the non-seasonal components of the series).

Parameter Estimation.
There are several different methods for estimating the parameters. All of them should produce very similar estimates, but may be more or less efficient for any given model. In general, during the parameter estimation phase a function minimization algorithm is used (the so-called quasi-Newton method; refer to the description of the Nonlinear Estimation method) to maximize the likelihood (probability) of the observed series, given the parameter values. In practice, this requires the calculation of the (conditional) sums of squares (SS) of the residuals, given the respective parameters. Different methods have been proposed to compute the SS for the residuals: (1) the approximate maximum likelihood method according to McLeod and Sales (1983), (2) the approximate maximum likelihood method with backcasting, and (3) the exact maximum likelihood method according to Melard (1984).

Comparison of methods. In general, all methods should yield very similar parameter estimates. Also, all methods are about equally efficient in most real-world time series applications. However, method 1 above (approximate maximum likelihood, no backcasts) is the fastest, and should be used in particular for very long time series (e.g., with more than 30,000 observations). Melard's exact maximum likelihood method (number 3 above) may also become inefficient when used to estimate parameters for seasonal models with long seasonal lags (e.g., with yearly lags of 365 days). On the other hand, you should always use the approximate maximum likelihood method first in order to establish initial parameter estimates that are very close to the actual final values; thus, usually only a few iterations with the exact maximum likelihood method (3, above) are necessary to finalize the parameter estimates.

Parameter standard errors. For all parameter estimates, you will compute so-called asymptotic standard errors. These are computed from the matrix of second-order partial derivatives that is approximated via finite differencing (see also the respective discussion in Nonlinear Estimation).
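The following minimal sketch shows the estimation and forecasting steps for the seasonal (0,1,2)(0,1,1) model mentioned above, using the ARIMA class from statsmodels (an assumed tool; the original text is software-neutral). The monthly seasonal lag of 12 and the simulated data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative monthly series with trend and lag-12 seasonality.
rng = np.random.default_rng(3)
t = np.arange(144)
y = pd.Series(100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size),
              index=pd.date_range("1949-01", periods=144, freq="MS"))

# ARIMA (0,1,2)(0,1,1) with a seasonal lag of 12: parameters are estimated
# by maximizing the likelihood of the observed (differenced) series.
model = ARIMA(y, order=(0, 1, 2), seasonal_order=(0, 1, 1, 12))
result = model.fit()
print(result.summary().tables[1])          # estimates and asymptotic standard errors

# Forecasting: the fitted model is "integrated" back automatically, so the
# forecasts and confidence intervals are on the scale of the input data.
forecast = result.get_forecast(steps=12)
print(forecast.predicted_mean.round(1))
print(forecast.conf_int().round(1).head())
```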
Penalty value. As mentioned above, the estimation procedure requires that the (conditional) sums of squares of the ARIMA residuals be minimized. If the model is inappropriate, it may happen during the iterative estimation process that the parameter estimates become very large, and, in fact, invalid. In that case, a very large value (a so-called penalty value) will be assigned to the SS. This usually entices the iteration process to move the parameters away from invalid ranges. However, in some cases even this strategy fails, and you may see on the screen (during the Estimation procedure) very large values for the SS in consecutive iterations. In that case, carefully evaluate the appropriateness of your model. If your model contains many parameters, and perhaps an intervention component (see below), you may try again with different parameter start values.

Evaluation of the Model.
Parameter estimates. You will report approximate t values, computed from the parameter standard errors (see above). If not significant, the respective parameter can in most cases be dropped from the model without substantially affecting the overall fit of the model.

Other quality criteria. Another straightforward and common measure of the reliability of the model is the accuracy of its forecasts generated based on partial data, so that the forecasts can be compared with known (original) observations.

However, a good model should not only provide sufficiently accurate forecasts, it should also be parsimonious and produce statistically independent residuals that contain only noise and no systematic components (e.g., the correlogram of residuals should not reveal any serial dependencies). A good test of the model is (a) to plot the residuals and inspect them for any systematic trends, and (b) to examine the autocorrelogram of residuals (there should be no serial dependency between residuals).

Analysis of residuals. The major concern here is that the residuals are systematically distributed across the series (e.g., they could be negative in the first part of the series and approach zero in the second part), or that they contain some serial dependency, which may suggest that the ARIMA model is inadequate. The analysis of ARIMA residuals constitutes an important test of the model. The estimation procedure assumes that the residuals are not auto-correlated and that they are normally distributed.

Limitations. The ARIMA method is appropriate only for a time series that is stationary (i.e., its mean, variance, and autocorrelation should be approximately constant through time), and it is recommended that there are at least 50 observations in the input data. It is also assumed that the values of the estimated parameters are constant throughout the series.

Interrupted Time Series ARIMA.
A common research question in time series analysis is whether an outside event affected subsequent observations. For example, did the implementation of a new economic policy improve economic performance; did a new anti-crime law affect subsequent crime rates; and so on. In general, we would like to evaluate the impact of one or more discrete events on the values in the time series. This type of interrupted time series analysis is described in detail in McDowall, McCleary, Meidinger, and Hay (1980). McDowall et al. distinguish between three major types of impacts that are possible: (1) permanent abrupt, (2) permanent gradual, and (3) abrupt temporary.
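To illustrate the residual checks described above, here is a short sketch in the same spirit as the previous example. The Ljung-Box test is one common way to formalize the "no serial dependency among residuals" check; using it (and statsmodels) is an assumption of this sketch, not something prescribed by the text.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import acf

# Refit a small illustrative model on a synthetic random-walk-like series.
rng = np.random.default_rng(4)
y = pd.Series(np.cumsum(rng.normal(size=200)) + 50)
result = ARIMA(y, order=(1, 1, 0)).fit()

resid = result.resid.iloc[1:]              # drop the first (undefined) residual

# (a) systematic trends: the residual mean should stay near zero across the series
print("mean, first half: ", round(resid.iloc[:100].mean(), 2))
print("mean, second half:", round(resid.iloc[100:].mean(), 2))

# (b) serial dependency: residual autocorrelations should be negligible
print("residual ACF, lags 1-5:", np.round(acf(resid, nlags=5)[1:], 2))
print(acorr_ljungbox(resid, lags=[10]))    # a large p-value suggests only noise remains
```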
Exponential Smoothing.
General Introduction.
Exponential smoothing has become very popular as a forecasting method for a wide variety of time series data. Historically, the method was independently developed by Brown and Holt. Brown worked for the US Navy during World War II, where his assignment was to design a tracking system for fire-control information to compute the location of submarines. Later, he applied this technique to the forecasting of demand for spare parts (an inventory control problem). He described those ideas in his 1959 book on inventory control. Holt's research was sponsored by the Office of Naval Research; independently, he developed exponential smoothing models for constant processes, processes with linear trends, and for seasonal data.

Gardner (1985) proposed a unified classification of exponential smoothing methods. Excellent introductions can also be found in Makridakis, Wheelwright, and McGee (1983), Makridakis and Wheelwright (1989), and Montgomery, Johnson, and Gardiner (1990).

Simple Exponential Smoothing.
A simple and pragmatic model for a time series would be to consider each observation as consisting of a constant (b) and an error component ε (epsilon), that is: X_t = b + ε_t. The constant b is relatively stable in each segment of the series, but may change slowly over time. If appropriate, then one way to isolate the true value of b, and thus the systematic or predictable part of the series, is to compute a kind of moving average, where the current and immediately preceding ("younger") observations are assigned greater weight than the respective older observations. Simple exponential smoothing accomplishes exactly such weighting, where exponentially smaller weights are assigned to older observations. The specific formula for simple exponential smoothing is:

S_t = α·X_t + (1-α)·S_(t-1)

When applied recursively to each successive observation in the series, each new smoothed value (forecast) is computed as the weighted average of the current observation and the previous smoothed observation; the previous smoothed observation was computed in turn from the previous observed value and the smoothed value before the previous observation, and so on. Thus, in effect, each smoothed value is the weighted average of the previous observations, where the weights decrease exponentially depending on the value of parameter α (alpha). If α is equal to 1 (one), then the previous observations are ignored entirely; if α is equal to 0 (zero), then the current observation is ignored entirely, and the smoothed value consists entirely of the previous smoothed value (which in turn is computed from the smoothed observation before it, and so on; thus all smoothed values will be equal to the initial smoothed value S_0). Values of α in between will produce intermediate results.

Even though significant work has been done to study the theoretical properties of (simple and complex) exponential smoothing (e.g., see Gardner, 1985; Muth, 1960; see also McKenzie, 1984, 1985), the method has gained popularity mostly because of its usefulness as a forecasting tool. For example, empirical research by Makridakis et al. (1982; Makridakis, 1983) has shown simple exponential smoothing to be the best choice for one-period-ahead forecasting, from among 24 other time series methods and using a variety of accuracy measures (see also Gross and Craig, 1974, for additional empirical evidence). Thus, regardless of the theoretical model for the process underlying the observed time series, simple exponential smoothing will often produce quite accurate forecasts.
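The recursion above is easy to write out directly; here is a minimal Python sketch (the series and the value α = 0.3 are arbitrary illustrative choices).

```python
import numpy as np

def simple_exp_smooth(x, alpha, s0=None):
    """Apply S_t = alpha * X_t + (1 - alpha) * S_(t-1) recursively."""
    s = np.empty(len(x))
    s[0] = x[0] if s0 is None else s0      # initial smoothed value S_0
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

rng = np.random.default_rng(5)
x = 50 + rng.normal(scale=5, size=30)       # constant level b = 50 plus noise
print(np.round(simple_exp_smooth(x, alpha=0.3)[:10], 1))
```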
Choosing the Best Value for Parameter α (alpha).
Gardner (1985) discusses various theoretical and empirical arguments for selecting an appropriate smoothing parameter. Obviously, looking at the formula presented above, α should fall into the interval between 0 (zero) and 1 (although, see Brenner et al., 1968, for an ARIMA perspective implying 0 < α < 2). Gardner (1985) reports that among practitioners, an α smaller than .30 is usually recommended. However, in the study by Makridakis et al. (1982), α values above .30 frequently yielded the best forecasts. After reviewing the literature on this topic, Gardner (1985) concludes that it is best to estimate an optimum α from the data (see below), rather than to guess and set an artificially low value.

Estimating the best α value from the data. In practice, the smoothing parameter is often chosen by a grid search of the parameter space; that is, different solutions for α are tried, starting, for example, with α = 0.1 to α = 0.9, with increments of 0.1. Then α is chosen so as to produce the smallest sums of squares (or mean squares) for the residuals (i.e., observed values minus one-step-ahead forecasts); this mean squared error is also referred to as ex post mean squared error (ex post MSE, for short).

Indices of Lack of Fit (Error).
The most straightforward way of evaluating the accuracy of the forecasts based on a particular α value is to simply plot the observed values and the one-step-ahead forecasts. This plot can also include the residuals (scaled against the right Y-axis), so that regions of better or worse fit can also easily be identified.

This visual check of the accuracy of forecasts is often the most powerful method for determining whether or not the current exponential smoothing model fits the data. In addition, besides the ex post MSE criterion (see previous paragraph), there are other statistical measures of error that can be used to determine the optimum α parameter (see Makridakis, Wheelwright, and McGee, 1983).

Mean error. The mean error (ME) value is simply computed as the average error value (average of observed minus one-step-ahead forecast). Obviously, a drawback of this measure is that positive and negative error values can cancel each other out, so this measure is not a very good indicator of overall fit.

Mean absolute error. The mean absolute error (MAE) value is computed as the average absolute error value. If this value is 0 (zero), the fit (forecast) is perfect. As compared to the mean squared error value, this measure of fit will de-emphasize outliers, that is, unique or rare large error values will affect the MAE less than the MSE value.

Sum of squared error (SSE), Mean squared error. These values are computed as the sum or average of the squared error values. This is the most commonly used lack-of-fit indicator in statistical fitting procedures.

Percentage error (PE). All the above measures rely on the actual error value. It may seem reasonable to rather express the lack of fit in terms of the relative deviation of the one-step-ahead forecasts from the observed values, that is, relative to the magnitude of the observed values. For example, when trying to predict monthly sales that may fluctuate widely (e.g., seasonally) from month to month, we may be satisfied if our prediction hits the target with about 10% accuracy. In other words, the absolute errors may not be of as much interest as the relative errors in the forecasts. To assess the relative error, various indices have been proposed (see Makridakis, Wheelwright, and McGee, 1983). The first one, the percentage error value, is computed as:

PE_t = 100·(X_t - F_t)/X_t
where X_t is the observed value at time t, and F_t is the forecast (smoothed value).

Mean percentage error (MPE). This value is computed as the average of the PE values.

Mean absolute percentage error (MAPE). As is the case with the mean error value (ME, see above), a mean percentage error near 0 (zero) can be produced by large positive and negative percentage errors that cancel each other out. Thus, a better measure of relative overall fit is the mean absolute percentage error. Also, this measure is usually more meaningful than the mean squared error. For example, knowing that the average forecast is off by 5% is a useful result in and of itself, whereas a mean squared error of 30.8 is not immediately interpretable.

Automatic search for the best parameter. A quasi-Newton function minimization procedure (the same as in ARIMA) is used to minimize either the mean squared error, mean absolute error, or mean absolute percentage error. In most cases, this procedure is more efficient than the grid search (particularly when more than one parameter must be determined), and the optimum α parameter can quickly be identified.

The first smoothed value S_0. A final issue that we have neglected up to this point is the problem of the initial value, or how to start the smoothing process. If you look back at the formula above, it is evident that you need an S_0 value in order to compute the smoothed value (forecast) for the first observation in the series. Depending on the choice of the α parameter (i.e., when α is close to zero), the initial value for the smoothing process can affect the quality of the forecasts for many observations. As with most other aspects of exponential smoothing, it is recommended to choose the initial value that produces the best forecasts. On the other hand, in practice, when there are many leading observations prior to a crucial actual forecast, the initial value will not affect that forecast by much, since its effect will have long faded from the smoothed series (due to the exponentially decreasing weights, the older an observation the less it will influence the forecast).
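A short sketch of both approaches: a grid search over α minimizing the ex post one-step-ahead MSE, and an automatic (numerically optimized) fit via statsmodels' SimpleExpSmoothing. The library choice and the synthetic data are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(6)
x = 50 + rng.normal(scale=5, size=100)

def one_step_mse(x, alpha):
    """Ex post MSE: average squared (observed minus one-step-ahead forecast)."""
    s = x[0]                                 # use the first observation as S_0
    errors = []
    for obs in x[1:]:
        errors.append(obs - s)               # forecast for this step is the previous S_t
        s = alpha * obs + (1 - alpha) * s    # update the smoothed value
    return np.mean(np.square(errors))

# Grid search: alpha = 0.1, 0.2, ..., 0.9
grid = {round(a, 1): one_step_mse(x, a) for a in np.arange(0.1, 1.0, 0.1)}
best_alpha = min(grid, key=grid.get)
print("grid-search alpha:", best_alpha, " MSE:", round(grid[best_alpha], 2))

# Automatic search: the smoothing parameter is estimated by numerical optimization.
fit = SimpleExpSmoothing(x).fit(optimized=True)
print("optimized alpha:", round(fit.params["smoothing_level"], 3))
```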
Seasonal and Non-Seasonal Models With or Without Trend.
The discussion above in the context of simple exponential smoothing introduced the basic procedure for identifying a smoothing parameter, and for evaluating the goodness-of-fit of a model. In addition to simple exponential smoothing, more complex models have been developed to accommodate time series with seasonal and trend components. The general idea here is that forecasts are not only computed from consecutive previous observations (as in simple exponential smoothing), but an independent (smoothed) trend and seasonal component can be added. Gardner (1985) discusses the different models in terms of seasonality (none, additive, or multiplicative) and trend (none, linear, exponential, or damped).

Additive and multiplicative seasonality. Many time series data follow recurring seasonal patterns. For example, annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This pattern will likely repeat every year; however, the relative amount of increase in sales during December may slowly change from year to year. Thus, it may be useful to smooth the seasonal component independently with an extra parameter, usually denoted as δ (delta).

Seasonal components can be additive in nature or multiplicative. For example, during the month of December the sales for a particular toy may increase by 1 million dollars every year. Thus, we could add to our forecasts for every December the amount of 1 million dollars (over the respective annual average) to account for this seasonal fluctuation. In this case, the seasonality is additive.

Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4).

In plots of the series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series.

The seasonal smoothing parameter δ. In general, the one-step-ahead forecasts are computed as (for no-trend models; for linear and exponential trend models a trend component is added to the model, see below):

Additive model: Forecast_t = S_t + I_(t-p)
Multiplicative model: Forecast_t = S_t · I_(t-p)

In this formula, S_t stands for the (simple) exponentially smoothed value of the series at time t, and I_(t-p) stands for the smoothed seasonal factor at time t minus p (the length of the season). Thus, compared to simple exponential smoothing, the forecast is enhanced by adding or multiplying the simple smoothed value by the predicted seasonal component. This seasonal component is derived analogously to the S_t value from simple exponential smoothing, as the seasonal component from the previous cycle modified by a portion of the current forecast error; in the additive case, for example, I_t = I_(t-p) + δ·(1-α)·e_t.

Put into words, the predicted seasonal component at time t is computed as the respective seasonal component in the last seasonal cycle plus a portion of the error (e_t, the observed minus the forecast value at time t). Considering the formulas above, it is clear that parameter δ can assume values between 0 and 1. If it is zero, then the seasonal component for a particular point in time is predicted to be identical to the predicted seasonal component for the respective time during the previous seasonal cycle, which in turn is predicted to be identical to that from the previous cycle, and so on. Thus, if δ is zero, a constant unchanging seasonal component is used to generate the one-step-ahead forecasts. If the δ parameter is equal to 1, then the seasonal component is modified maximally at every step by the respective forecast error (times (1-α), which we will ignore for the purpose of this brief introduction). In most cases, when seasonality is present in the time series, the optimum δ parameter will fall somewhere between 0 (zero) and 1 (one).
Linear, exponential, and damped trend. To remain with the toy example above, the sales for a toy can show a linear upward trend (e.g., each year, sales increase by 1 million dollars), exponential growth (e.g., each year, sales increase by a factor of 1.3), or a damped trend (during the first year sales increase by 1 million dollars; during the second year the increase is only 80% of that of the previous year, i.e., $800,000; during the next year it is again 80% of the previous year's increase, i.e., $800,000 * 0.8 = $640,000; etc.). Each type of trend leaves a clear signature that can usually be identified in the series; shown below in the brief discussion of the different models are icons that illustrate the general patterns. In general, the trend factor may change slowly over time, and, again, it may make sense to smooth the trend component with a separate parameter (denoted γ [gamma] for linear and exponential trend models, and φ [phi] for damped trend models).

The trend smoothing parameters γ (linear and exponential trend) and φ (damped trend). Analogous to the seasonal component, when a trend component is included in the exponential smoothing process, an independent trend component is computed for each time point, and modified as a function of the forecast error and the respective parameter. If the γ parameter is 0 (zero), then the trend component is constant across all values of the time series (and for all forecasts). If the parameter is 1, then the trend component is modified maximally from observation to observation by the respective forecast error. Parameter values that fall in-between represent mixtures of those two extremes. Parameter φ is a trend modification parameter, and affects how strongly changes in the trend will affect estimates of the trend for subsequent forecasts, that is, how quickly the trend will be damped or increased.
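These seasonal and trend variants correspond to what is commonly called Holt-Winters smoothing; a hedged sketch using statsmodels' ExponentialSmoothing follows. The library, the monthly period of 12, and the choice of a damped additive trend with multiplicative seasonality are illustrative assumptions, and the parameter names printed below follow recent statsmodels versions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
t = np.arange(96)
seasonal = 1 + 0.2 * np.sin(2 * np.pi * t / 12)          # multiplicative seasonal swing
y = pd.Series((100 + 1.5 * t) * seasonal + rng.normal(0, 3, t.size),
              index=pd.date_range("2000-01", periods=96, freq="MS"))

# Damped additive trend, multiplicative seasonality, season length 12.
model = ExponentialSmoothing(y, trend="add", damped_trend=True,
                             seasonal="mul", seasonal_periods=12)
fit = model.fit()

# Estimated smoothing parameters: level (alpha), trend (gamma in the text above),
# seasonal (delta in the text), and the damping parameter (phi).
wanted = ("smoothing_level", "smoothing_trend", "smoothing_seasonal", "damping_trend")
print({k: round(v, 3) for k, v in fit.params.items() if k in wanted})
print(fit.forecast(12).round(1))
```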
Classical Seasonal Decomposition (Census Method 1).
General Introduction.
Suppose you recorded the monthly passenger load on international flights for a period of 12 years (see Box & Jenkins, 1976). If you plot those data, it is apparent that (1) there appears to be a linear upwards trend in the passenger loads over the years, and (2) there is a recurring pattern or seasonality within each year (i.e., most travel occurs during the summer months, and a minor peak occurs during the December holidays). The purpose of the seasonal decomposition method is to isolate those components, that is, to de-compose the series into the trend effect, seasonal effects, and remaining variability. The classic technique designed to accomplish this decomposition is known as the Census I method. This technique is described and discussed in detail in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989).

General model. The general idea of seasonal decomposition is straightforward. In general, a time series like the one described above can be thought of as consisting of four different components: (1) a seasonal component (denoted as S_t, where t stands for the particular point in time), (2) a trend component (T_t), (3) a cyclical component (C_t), and (4) a random, error, or irregular component (I_t). The difference between a cyclical and a seasonal component is that the latter occurs at regular (seasonal) intervals, while cyclical factors usually have a longer duration that varies from cycle to cycle. In the Census I method, the trend and cyclical components are customarily combined into a trend-cycle component (TC_t). The specific functional relationship between these components can assume different forms. However, two straightforward possibilities are that they combine in an additive or a multiplicative fashion:

Additive model: X_t = TC_t + S_t + I_t
Multiplicative model: X_t = TC_t · S_t · I_t

Here X_t stands for the observed value of the time series at time t. Given some a priori knowledge about the cyclical factors affecting the series (e.g., business cycles), the estimates for the different components can be used to compute forecasts for future observations. (However, the Exponential smoothing method, which can also incorporate seasonality and trend components, is the preferred technique for forecasting purposes.)
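As a quick illustration, statsmodels provides a classical moving-average decomposition along these lines (this is an assumed tool, not the Census I implementation itself; the multiplicative model and period of 12 are illustrative choices).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(8)
t = np.arange(144)
y = pd.Series((100 + 0.8 * t) * (1 + 0.25 * np.sin(2 * np.pi * t / 12))
              + rng.normal(0, 3, t.size),
              index=pd.date_range("1949-01", periods=144, freq="MS"))

# X_t = TC_t * S_t * I_t : classical multiplicative decomposition with a
# centered 12-month moving average for the trend-cycle component.
result = seasonal_decompose(y, model="multiplicative", period=12)

print(result.seasonal.iloc[:12].round(3))        # seasonal factors for one year
print(result.trend.dropna().iloc[:3].round(1))   # start of the trend-cycle estimate

# Seasonally adjusted series: divide out the seasonal component (multiplicative model).
seasonally_adjusted = y / result.seasonal
```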
Additive and multiplicative seasonality. Let's consider the difference between an additive and a multiplicative seasonal component in an example: The annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This seasonal pattern will likely repeat every year. Seasonal components can be additive or multiplicative in nature. For example, during the month of December the sales for a particular toy may increase by 3 million dollars every year. Thus, we could add to our forecasts for every December the amount of 3 million to account for this seasonal fluctuation. In this case, the seasonality is additive. Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4). In plots of series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series.

Additive and multiplicative trend-cycle. We can extend the previous example to illustrate the additive and multiplicative trend-cycle components. In terms of our toy example, a fashion trend may produce a steady increase in sales (e.g., a trend towards more educational toys in general); as with the seasonal component, this trend may be additive (sales increase by 3 million dollars per year) or multiplicative (sales increase by 30%, or by a factor of 1.3, annually) in nature. In addition, cyclical components may impact sales; to reiterate, a cyclical component is different from a seasonal component in that it usually is of longer duration, and that it occurs at irregular intervals. For example, a particular toy may be particularly hot during a summer season (e.g., a particular doll which is tied to the release of a major children's movie, and is promoted with extensive advertising). Again, such a cyclical component can affect sales in an additive manner or a multiplicative manner.

The Seasonal Decomposition (Census I) standard formulas are shown in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989).

Moving average. First a moving average is computed for the series, with the moving average window width equal to the length of one season. If the length of the season is even, then the user can choose to use either equal weights for the moving average, or unequal weights can be used, where the first and last observation in the moving average window are averaged.
Ratios or differences. In the moving average series, all seasonal (within-season) variability will be eliminated; thus, the differences (in additive models) or ratios (in multiplicative models) of the observed and smoothed series will isolate the seasonal component (plus irregular component). Specifically, the moving average is subtracted from the observed series (for additive models), or the observed series is divided by the moving average values (for multiplicative models).

Seasonal components. The seasonal component is then computed as the average (for additive models) or medial average (for multiplicative models) for each point in the season. (The medial average of a set of values is the mean after the smallest and largest values are excluded.) The resulting values represent the (average) seasonal component of the series.

Seasonally adjusted series. The original series can be adjusted by subtracting from it (additive models), or dividing it by (multiplicative models), the seasonal component. The resulting series is the seasonally adjusted series (i.e., the seasonal component will be removed).

Trend-cycle component. Remember that the cyclical component is different from the seasonal component in that it is usually longer than one season, and different cycles can be of different lengths. The combined trend and cyclical component can be approximated by applying to the seasonally adjusted series a 5-point (centered) weighted moving average smoothing transformation with the weights of 1, 2, 3, 2, 1.

Random or irregular component. Finally, the random or irregular (error) component can be isolated by subtracting from the seasonally adjusted series (additive models), or dividing the adjusted series by (multiplicative models), the trend-cycle component.

X-11 Census Method II Seasonal Adjustment.
The general ideas of seasonal decomposition and adjustment are discussed in the context of the Census I seasonal adjustment method (Seasonal Decomposition Census I). The Census method II (2) is an extension and refinement of the simple adjustment method. Over the years, different versions of the Census method II evolved at the Census Bureau; the method that has become most popular and is used most widely in government and business is the so-called X-11 variant of the Census method II (see Shiskin, Young, & Musgrave, 1967). Subsequently, the term X-11 has become synonymous with this refined version of the Census method II. In addition to the documentation that can be obtained from the Census Bureau, a detailed summary of this method is also provided in Makridakis, Wheelwright, and McGee (1983) and Makridakis and Wheelwright (1989). For more information on this method, see the following topics. For more information on other Time Series methods, see Time Series Analysis - Index and the following topics.

Seasonal Adjustment: Basic Ideas and Terms.
Suppose you recorded the monthly passenger load on international flights for a period of 12 years (see Box & Jenkins, 1976). If you plot those data, it is apparent that (1) there appears to be an upwards linear trend in the passenger loads over the years, and (2) there is a recurring pattern or seasonality within each year (i.e., most travel occurs during the summer months, and a minor peak occurs during the December holidays). The purpose of seasonal decomposition and adjustment is to isolate those components, that is, to de-compose the series into the trend effect, seasonal effects, and remaining variability. The classic technique designed to accomplish this decomposition was developed in the 1920s and is also known as the Census I method (see the Census I overview section). This technique is also described and discussed in detail in Makridakis, Wheelwright, and McGee (1983), and Makridakis and Wheelwright (1989).

General model. The general idea of seasonal decomposition is straightforward. In general, a time series like the one described above can be thought of as consisting of four different components: (1) a seasonal component (denoted as S_t, where t stands for the particular point in time), (2) a trend component (T_t), (3) a cyclical component (C_t), and (4) a random, error, or irregular component (I_t). The difference between a cyclical and a seasonal component is that the latter occurs at regular (seasonal) intervals, while cyclical factors usually have a longer duration that varies from cycle to cycle. The trend and cyclical components are customarily combined into a trend-cycle component (TC_t). The specific functional relationship between these components can assume different forms. However, two straightforward possibilities are that they combine in an additive or a multiplicative fashion:

Additive model: X_t = TC_t + S_t + I_t
Multiplicative model: X_t = TC_t · S_t · I_t

X_t represents the observed value of the time series at time t.

Given some a priori knowledge about the cyclical factors affecting the series (e.g., business cycles), the estimates for the different components can be used to compute forecasts for future observations. (However, the Exponential smoothing method, which can also incorporate seasonality and trend components, is the preferred technique for forecasting purposes.)

Additive and multiplicative seasonality. Consider the difference between an additive and a multiplicative seasonal component in an example: The annual sales of toys will probably peak in the months of November and December, and perhaps during the summer (with a much smaller peak) when children are on their summer break. This seasonal pattern will likely repeat every year. Seasonal components can be additive or multiplicative in nature. For example, during the month of December the sales for a particular toy may increase by 3 million dollars every year. Thus, you could add to your forecasts for every December the amount of 3 million to account for this seasonal fluctuation. In this case, the seasonality is additive.
Alternatively, during the month of December the sales for a particular toy may increase by 40%, that is, increase by a factor of 1.4. Thus, when the sales for the toy are generally weak, then the absolute (dollar) increase in sales during December will be relatively weak (but the percentage will be constant); if the sales of the toy are strong, then the absolute (dollar) increase in sales will be proportionately greater. Again, in this case the sales increase by a certain factor, and the seasonal component is thus multiplicative in nature (i.e., the multiplicative seasonal component in this case would be 1.4). In plots of series, the distinguishing characteristic between these two types of seasonal components is that in the additive case, the series shows steady seasonal fluctuations, regardless of the overall level of the series; in the multiplicative case, the size of the seasonal fluctuations varies, depending on the overall level of the series.

Additive and multiplicative trend-cycle. The previous example can be extended to illustrate the additive and multiplicative trend-cycle components. In terms of the toy example, a fashion trend may produce a steady increase in sales (e.g., a trend towards more educational toys in general); as with the seasonal component, this trend may be additive (sales increase by 3 million dollars per year) or multiplicative (sales increase by 30%, or by a factor of 1.3, annually) in nature. In addition, cyclical components may impact sales. To reiterate, a cyclical component is different from a seasonal component in that it usually is of longer duration, and that it occurs at irregular intervals. For example, a particular toy may be particularly hot during a summer season (e.g., a particular doll which is tied to the release of a major children's movie, and is promoted with extensive advertising). Again, such a cyclical component can affect sales in an additive manner or a multiplicative manner.

The Census II Method.
The basic method for seasonal decomposition and adjustment outlined in the Basic Ideas and Terms topic can be refined in several ways. In fact, unlike many other time-series modeling techniques (e.g., ARIMA), which are grounded in some theoretical model of an underlying process, the X-11 variant of the Census II method simply contains many ad hoc features and refinements that, over the years, have proven to provide excellent estimates for many real-world applications (see Burman, 1979; Kendall & Ord, 1990; Makridakis & Wheelwright, 1989; Wallis, 1974). Some of the major refinements are listed below.

Trading-day adjustment. Different months have different numbers of days, and different numbers of trading-days (i.e., Mondays, Tuesdays, etc.). When analyzing, for example, monthly revenue figures for an amusement park, the fluctuation in the different numbers of Saturdays and Sundays (peak days) in the different months will surely contribute significantly to the variability in monthly revenues. The X-11 variant of the Census II method allows the user to test whether such trading-day variability exists in the series, and, if so, to adjust the series accordingly.
Extreme values. Most real-world time series contain outliers, that is, extreme fluctuations due to rare events. For example, a strike may affect production in a particular month of one year. Such extreme outliers may bias the estimates of the seasonal and trend components. The X-11 procedure includes provisions to deal with extreme values through the use of statistical control principles, that is, values that are above or below a certain range (expressed in terms of multiples of sigma, the standard deviation) can be modified or dropped before final estimates for the seasonality are computed.

Multiple refinements. The refinement for outliers, extreme values, and different numbers of trading-days can be applied more than once, in order to obtain successively improved estimates of the components. The X-11 method applies a series of successive refinements of the estimates to arrive at the final trend-cycle, seasonal, and irregular components, and the seasonally adjusted series.

Tests and summary statistics. In addition to estimating the major components of the series, various summary statistics can be computed. For example, analysis of variance tables can be prepared to test the significance of seasonal variability and trading-day variability (see above) in the series; the X-11 procedure will also compute the percentage change from month to month in the random and trend-cycle components. As the duration or span (in terms of months, or quarters for quarterly X-11) increases, the change in the trend-cycle component will likely also increase, while the change in the random component should remain about the same. The width of the average span at which the changes in the random component are about equal to the changes in the trend-cycle component is called the month (quarter) for cyclical dominance, or MCD (QCD) for short. For example, if the MCD is equal to 2, then you can infer that over a 2-month span the trend-cycle will dominate the fluctuations of the irregular (random) component. These and various other results are discussed in greater detail below.

Result Tables Computed by the X-11 Method.
The computations performed by the X-11 procedure are best discussed in the context of the results tables that are reported. The adjustment process is divided into seven major steps, which are customarily labeled with consecutive letters A through G.

Prior adjustment (monthly seasonal adjustment only). Before any seasonal adjustment is performed on the monthly time series, various prior user-defined adjustments can be incorporated. The user can specify a second series that contains prior adjustment factors; the values in that series will either be subtracted (additive model) from the original series, or the original series will be divided by these values (multiplicative model). For multiplicative models, user-specified trading-day adjustment weights can also be specified. These weights will be used to adjust the monthly observations depending on the number of respective trading-days represented by the observation.

Preliminary estimation of trading-day variation (monthly X-11) and weights. Next, preliminary trading-day adjustment factors (monthly X-11 only) and weights for reducing the effect of extreme observations are computed.

Final estimation of trading-day variation and irregular weights (monthly X-11). The adjustments and weights computed in B above are then used to derive improved trend-cycle and seasonal estimates. These improved estimates are used to compute the final trading-day factors (monthly X-11 only) and weights.
Final estimation of seasonal factors, trend-cycle, irregular, and seasonally adjusted series. The final trading-day factors and weights computed in C above are used to compute the final estimates of the components.

Modified original, seasonally adjusted, and irregular series. The original and final seasonally adjusted series, and the irregular component, are modified for extremes. The resulting modified series allow the user to examine the stability of the seasonal adjustment.

Month (quarter) for cyclical dominance (MCD, QCD), moving average, and summary measures. In this part of the computations, various summary measures (see below) are computed to allow the user to examine the relative importance of the different components, the average fluctuation from month-to-month (quarter-to-quarter), the average number of consecutive changes in the same direction (average number of runs), etc.

Charts. Finally, you will compute various charts (graphs) to summarize the results. For example, the final seasonally adjusted series will be plotted, in chronological order, or by month (see below).

Specific Description of all Result Tables Computed by the X-11 Method.
In each part A through G of the analysis (see Result Tables Computed by the X-11 Method), different result tables are computed. Customarily, these tables are numbered, and also identified by a letter to indicate the respective part of the analysis. For example, table B 11 shows the initial seasonally adjusted series, C 11 is the refined seasonally adjusted series, and D 11 is the final seasonally adjusted series. Shown below is a list of all available tables. Those tables identified by an asterisk are not available (applicable) when analyzing quarterly series. Also, for quarterly adjustment, some of the computations outlined below are slightly different; for example, instead of a 12-term (monthly) moving average, a 4-term (quarterly) moving average is applied to compute the seasonal factors; the initial trend-cycle estimate is computed via a centered 4-term moving average, and the final trend-cycle estimate in each part is computed by a 5-term Henderson average.

Following the convention of the Bureau of the Census version of the X-11 method, three levels of printout detail are offered: Standard (17 to 27 tables), Long (27 to 39 tables), and Full (44 to 59 tables). In the description of each table below, the letters S, L, and F are used next to each title to indicate which tables will be displayed and/or printed at the respective setting of the output option. For the charts, two levels of detail are available: Standard and All.

See the table name below to obtain more information about that table.

A 2. Prior Monthly Adjustment (S) Factors.

Tables B 14 through B 16, B 18, and B 19: Adjustment for trading-day variation. These tables are only available when analyzing monthly series. Different months contain different numbers of days of the week (i.e., Mondays, Tuesdays, etc.). In some series, the variation in the different numbers of trading-days may contribute significantly to monthly fluctuations (e.g., the monthly revenues of an amusement park will be greatly influenced by the number of Saturdays/Sundays in each month). The user can specify initial weights for each trading-day (see A 4), and/or these weights can be estimated from the data (the user can also choose to apply those weights conditionally, i.e., only if they explain a significant proportion of variance).

B 14. Extreme Irregular Values Excluded from Trading-day Regression (L).

B 15. Preliminary Trading-day Regression (L).
B 16. Trading-day Adjustment Factors Derived from Regression Coefficients (F).

B 17. Preliminary Weights for Irregular Component (L).

B 18. Trading-day Factors Derived from Combined Daily Weights (F).

B 19. Original Series Adjusted for Trading-day and Prior Variation (F).

C 1. Original Series Modified by Preliminary Weights and Adjusted for Trading-day and Prior Variation (L).

Tables C 14 through C 16, C 18, and C 19: Adjustment for trading-day variation. These tables are only available when analyzing monthly series, and when adjustment for trading-day variation is requested. In that case, the trading-day adjustment factors are computed from the refined adjusted series, analogous to the adjustment performed in part B (B 14 through B 16, B 18, and B 19).

C 14. Extreme Irregular Values Excluded from Trading-day Regression (S).

C 15. Final Trading-day Regression (S).

C 16. Final Trading-day Adjustment Factors Derived from Regression Coefficients (S).

C 17. Final Weights for Irregular Component (S).

C 18. Final Trading-day Factors Derived from Combined Daily Weights (S).

C 19. Original Series Adjusted for Trading-day and Prior Variation (S).

D 1. Original Series Modified by Final Weights and Adjusted for Trading-day and Prior Variation (L).

Distributed Lags Analysis.
For more information on other Time Series methods, see Time Series Analysis - Index and the following topics.

General Purpose.
Distributed lags analysis is a specialized technique for examining the relationships between variables that involve some delay. For example, suppose that you are a manufacturer of computer software, and you want to determine the relationship between the number of inquiries that are received and the number of orders that are placed by your customers. You could record those numbers monthly for a one-year period, and then correlate the two variables. However, obviously inquiries will precede actual orders, and you can expect that the number of orders will follow the number of inquiries with some delay. Put another way, there will be a (time) lagged correlation between the number of inquiries and the number of orders that are received.

Time-lagged correlations are particularly common in econometrics. For example, the benefits of investments in new machinery usually only become evident after some time. Higher income will change people's choice of rental apartments; however, this relationship will be lagged because it will take some time for people to terminate their current leases, find new apartments, and move. In general, the relationship between capital appropriations and capital expenditures will be lagged, because it will require some time before investment decisions are actually acted upon.

In all of these cases, we have an independent or explanatory variable that affects the dependent variable with some lag. The distributed lags method allows you to investigate those lags.

Detailed discussions of distributed lags correlation can be found in most econometrics textbooks, for example, in Judge, Griffith, Hill, Luetkepohl, and Lee (1985), Maddala (1977), and Fomby, Hill, and Johnson (1984). In the following paragraphs we will present a brief description of these methods. We will assume that you are familiar with the concept of correlation (see Basic Statistics) and the basic ideas of multiple regression (see Multiple Regression).

General Model.
General Model.

Suppose we have a dependent variable y and an independent or explanatory variable x which are both measured repeatedly over time. In some textbooks, the dependent variable is also referred to as the endogenous variable, and the independent or explanatory variable as the exogenous variable. The simplest way to describe the relationship between the two would be in a simple linear relationship:

y_t = Σ_i β_i * x_(t-i)

In this equation, the value of the dependent variable at time t is expressed as a linear function of x measured at times t, t-1, t-2, etc. Thus, the dependent variable is a linear function of x, and x is lagged by 1, 2, etc. time periods. The beta weights (β_i) can be considered slope parameters in this equation. You may recognize this equation as a special case of the general linear regression equation (see the Multiple Regression overview). If the weights for the lagged time periods are statistically significant, we can conclude that the y variable is predicted (or explained) with the respective lag.

Almon Distributed Lag.

A common problem that often arises when computing the weights for the multiple linear regression model shown above is that the values of adjacent (in time) values in the x variable are highly correlated. In extreme cases, their independent contributions to the prediction of y may become so redundant that the correlation matrix of measures can no longer be inverted, and thus the beta weights cannot be computed. In less extreme cases, the computation of the beta weights and their standard errors can become very imprecise, due to round-off error. In the context of Multiple Regression this general computational problem is discussed as the multicollinearity or matrix ill-conditioning issue.

Almon (1965) proposed a procedure that will reduce the multicollinearity in this case. Specifically, suppose we express each weight in the linear regression equation in the following manner:

β_i = α_0 + α_1*i + α_2*i² + ... + α_q*i^q

Almon could show that in many cases it is easier (i.e., it avoids the multicollinearity problem) to estimate the alpha values than the beta weights directly. Note that with this method, the precision of the beta weight estimates is dependent on the degree (or order) of the polynomial approximation.

Misspecifications. A general problem with this technique is that, of course, the lag length and correct polynomial degree are not known a priori. The effects of misspecifications of these parameters are potentially serious (in terms of biased estimation). This issue is discussed in greater detail in Frost (1975), Schmidt and Waud (1973), Schmidt and Sickles (1975), and Trivedi and Pagan (1979).
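As a rough illustration of the Almon idea (constraining the lag weights to lie on a low-order polynomial in the lag index), here is a minimal numpy sketch. The function name, the default lag length q, and the polynomial degree are chosen here for illustration only; this is not the toolkit's own routine.

# Minimal sketch of an Almon polynomial distributed lag fit (illustrative only;
# lag length q and polynomial degree are assumed/hand-picked here).
import numpy as np

def almon_fit(y, x, q=4, degree=2):
    """Fit y_t = sum_i beta_i * x_(t-i), i = 0..q, with beta_i constrained
    to a polynomial of the given degree in the lag index i."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    # Lag matrix: X[t, i] = x_(t-i), for t = q .. n-1
    X = np.column_stack([x[q - i : n - i] for i in range(q + 1)])
    # Z[t, j] = sum_i i**j * x_(t-i); regressing y on Z estimates the alphas
    powers = np.vander(np.arange(q + 1), degree + 1, increasing=True)
    Z = X @ powers
    alpha, *_ = np.linalg.lstsq(Z, y[q:], rcond=None)
    return powers @ alpha            # implied lag weights: beta_i = sum_j alpha_j * i**j

Regressing on the handful of constructed Z columns, rather than on all q + 1 highly correlated lagged copies of x, is what sidesteps the multicollinearity among adjacent lags; the beta weights are then recovered from the estimated alphas.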
Single Spectrum (Fourier) Analysis.

Spectrum analysis is concerned with the exploration of cyclical patterns of data. The purpose of the analysis is to decompose a complex time series with cyclical components into a few underlying sinusoidal (sine and cosine) functions of particular wavelengths. The term "spectrum" provides an appropriate metaphor for the nature of this analysis. Suppose you study a beam of white sun light, which at first looks like a random (white noise) accumulation of light of different wavelengths. However, when put through a prism, we can separate the different wavelengths or cyclical components that make up white sun light. In fact, via this technique we can now identify and distinguish between different sources of light. Thus, by identifying the important underlying cyclical components, we have learned something about the phenomenon of interest. In essence, performing spectrum analysis on a time series is like putting the series through a prism in order to identify the wavelengths and importance of underlying cyclical components. As a result of a successful analysis, you might uncover just a few recurring cycles of different lengths in the time series of interest, which at first looked more or less like random noise.

A much cited example for spectrum analysis is the cyclical nature of sun spot activity (e.g., see Bloomfield, 1976, or Shumway, 1988). It turns out that sun spot activity varies over 11 year cycles. Other examples of celestial phenomena, weather patterns, fluctuations in commodity prices, economic activity, etc. are also often used in the literature to demonstrate this technique. To contrast this technique with ARIMA or Exponential Smoothing: the purpose of spectrum analysis is to identify the seasonal fluctuations of different lengths, while in the former types of analysis, the length of the seasonal component is usually known (or guessed) a priori and then included in some theoretical model of moving averages or autocorrelations.

The classic text on spectrum analysis is Bloomfield (1976); however, other detailed discussions can be found in Jenkins and Watts (1968), Brillinger (1975), Brigham (1974), Elliott and Rao (1982), Priestley (1981), Shumway (1988), or Wei (1989).

For more information, see Time Series Analysis - Index and the following topics.

Cross-Spectrum Analysis.

For more information, see Time Series Analysis - Index and the following topics.

General Introduction.

Cross-spectrum analysis is an extension of Single Spectrum (Fourier) Analysis to the simultaneous analysis of two series. In the following paragraphs, we will assume that you have already read the introduction to single spectrum analysis. Detailed discussions of this technique can be found in Bloomfield (1976), Jenkins and Watts (1968), Brillinger (1975), Brigham (1974), Elliott and Rao (1982), Priestley (1981), Shumway (1988), or Wei (1989).

A much cited example for spectrum analysis is the cyclical nature of sun spot activity (e.g., see Bloomfield, 1976, or Shumway, 1988). It turns out that sun spot activity varies over 11 year cycles. Other examples of celestial phenomena, weather patterns, fluctuations in commodity prices, economic activity, etc. are also often used in the literature to demonstrate this technique.
The purpose of cross-spectrum analysis is to uncover the correlations between two series at different frequencies. For example, sun spot activity may be related to weather phenomena here on earth. If so, then if we were to record those phenomena (e.g., yearly average temperature) and submit the resulting series to a cross-spectrum analysis together with the sun spot data, we may find that the weather indeed correlates with the sunspot activity at the 11 year cycle. That is, we may find a periodicity in the weather data that is "in-sync" with the sun spot cycles. We can easily think of other areas of research where such knowledge could be very useful; for example, various economic indicators may show similar (correlated) cyclical behavior, various physiological measures likely will also display coordinated (i.e., correlated) cyclical behavior, and so on.

Basic Notation and Principles.

A simple example. Consider the following two series with 16 cases.

Results for Each Variable.

The complete summary contains all spectrum statistics computed for each variable, as described in the Single Spectrum (Fourier) Analysis overview section. Looking at the results shown above, it is clear that both variables show strong periodicities at the frequencies .0625 and .1875.

Cross-Periodogram, Cross-Density, Quadrature-Density, Cross-Amplitude.

Analogous to the results for the single variables, the complete summary will also display periodogram values for the cross periodogram. However, the cross-spectrum consists of complex numbers that can be divided into a real and an imaginary part. These can be smoothed to obtain the cross-density and quadrature density (quad density for short) estimates, respectively. (The reasons for smoothing, and the different common weight functions for smoothing, are discussed in the Single Spectrum (Fourier) Analysis section.) The square root of the sum of the squared cross-density and quad-density values is called the cross-amplitude. The cross-amplitude can be interpreted as a measure of covariance between the respective frequency components in the two series. Thus we can conclude from the results shown in the table above that the .0625 and .1875 frequency components in the two series covary.

Squared Coherency, Gain, and Phase Shift.

There are additional statistics that can be displayed in the complete summary.

Squared coherency. You can standardize the cross-amplitude values by squaring them and dividing by the product of the spectrum density estimates for each series. The result is called the squared coherency, which can be interpreted similar to the squared correlation coefficient (see Correlations - Overview); that is, the coherency value is the squared correlation between the cyclical components in the two series at the respective frequency. However, the coherency values should not be interpreted by themselves; for example, when the spectral density estimates in both series are very small, large coherency values may result (the divisor in the computation of the coherency values will be very small), even though there are no strong cyclical components in either series at the respective frequencies.

Gain. The gain value is computed by dividing the cross-amplitude value by the spectrum density estimates for one of the two series in the analysis. Consequently, two gain values are computed, which can be interpreted as the standard least squares regression coefficients for the respective frequencies.
Phase shift. Finally, the phase shift estimates are computed as tan⁻¹ of the ratio of the quad density estimates over the cross-density estimates. The phase shift estimates (usually denoted by a Greek letter) are measures of the extent to which each frequency component of one series leads the other.

How the Example Data were Created.

Now, let's return to the example data set presented above. The large spectral density estimates for both series, and the cross-amplitude values at frequencies .0625 and .1875, suggest two strong synchronized periodicities in both series at those frequencies. In fact, the two series were created as:

v1 = cos(2π*.0625*(v0-1)) + .75*sin(2π*.2*(v0-1))
v2 = cos(2π*.0625*(v0+2)) + .75*sin(2π*.2*(v0+2))

where v0 is the case number. Indeed, the analysis presented in this overview reproduced the periodicity inserted into the data very well.

Spectrum Analysis - Basic Notation and Principles.

For more information, see Time Series Analysis - Index and the following topics.

Frequency and Period.

The wave length of a sine or cosine function is typically expressed in terms of the number of cycles per unit time (Frequency), often denoted by the Greek letter ν (nu; some textbooks also use f). For example, the number of letters handled in a post office may show 12 cycles per year: on the first of every month a large amount of mail is sent (many bills come due on the first of the month), then the amount of mail decreases in the middle of the month, then it increases again towards the end of the month. Therefore, every month the fluctuation in the amount of mail handled by the post office will go through a full cycle. Thus, if the unit of analysis is one year, then ν would be equal to 12, as there would be 12 cycles per year. Of course, there will likely be other cycles with different frequencies. For example, there might be annual cycles (ν = 1), and perhaps weekly cycles (ν = 52 weeks per year).

The period T of a sine or cosine function is defined as the length of time required for one full cycle. Thus, it is the reciprocal of the frequency, or T = 1/ν. To return to the mail example in the previous paragraph, the monthly cycle, expressed in yearly terms, would be equal to 1/12 = 0.0833. Put into words, there is a period in the series of length 0.0833 years.

The General Structural Model.

As mentioned before, the purpose of spectrum analysis is to decompose the original series into underlying sine and cosine functions of different frequencies, in order to determine those that appear particularly strong or important. One way to do so would be to cast the issue as a linear Multiple Regression problem, where the dependent variable is the observed time series, and the independent variables are the sine functions of all possible (discrete) frequencies. Such a linear multiple regression model can be written as:
x_t = a_0 + Σ_(k=1..q) [ a_k*cos(λ_k*t) + b_k*sin(λ_k*t) ]

Following the common notation from classical harmonic analysis, in this equation λ (lambda) is the frequency expressed in terms of radians per unit time, that is, λ_k = 2π*ν_k, where π is the constant pi (3.14...) and ν_k = k/q. What is important here is to recognize that the computational problem of fitting sine and cosine functions of different lengths to the data can be considered in terms of multiple linear regression. Note that the cosine parameters a_k and sine parameters b_k are regression coefficients that tell us the degree to which the respective functions are correlated with the data. Overall there are q different sine and cosine functions; intuitively (as also discussed in Multiple Regression), it should be clear that we cannot have more sine and cosine functions than there are data points in the series. Without going into detail, if there are N data points in the series, then there will be N/2 + 1 cosine functions and N/2 - 1 sine functions. In other words, there will be as many different sinusoidal waves as there are data points, and we will be able to completely reproduce the series from the underlying functions. (Note that if the number of cases in the series is odd, then the last data point will usually be ignored; in order for a sinusoidal function to be identified, you need at least two points: the high peak and the low peak.)

To summarize, spectrum analysis will identify the correlation of sine and cosine functions of different frequency with the observed data. If a large correlation (sine or cosine coefficient) is identified, you can conclude that there is a strong periodicity of the respective frequency (or period) in the data.

Complex numbers (real and imaginary numbers). In many textbooks on spectrum analysis, the structural model shown above is presented in terms of complex numbers; that is, the parameter estimation process is described in terms of the Fourier transform of a series into real and imaginary parts. Complex numbers are the superset that includes all real and imaginary numbers. Imaginary numbers, by definition, are numbers that are multiplied by the constant i, where i is defined as the square root of -1. Obviously, the square root of -1 does not exist as a real number, hence the term imaginary number; however, meaningful arithmetic operations on imaginary numbers can still be performed (e.g., (i*2)² = -4). It is useful to think of real and imaginary numbers as forming a two-dimensional plane, where the horizontal or X-axis represents all real numbers, and the vertical or Y-axis represents all imaginary numbers. Complex numbers can then be represented as points in the two-dimensional plane. For example, the complex number 3 + i*2 can be represented by a point with coordinates (3, 2) in this plane. You can also think of complex numbers as angles; for example, you can connect the point representing a complex number in the plane with the origin (complex number 0 + i*0), and measure the angle of that vector to the horizontal line. Thus, intuitively you can see how the spectrum decomposition formula shown above, consisting of sine and cosine functions, can be rewritten in terms of operations on complex numbers. In fact, in this manner the mathematical discussion and required computations are often more elegant and easier to perform, which is why many textbooks prefer the presentation of spectrum analysis in terms of complex numbers.
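The connection between the sine/cosine regression and the Fourier transform can be sketched directly: for frequencies that lie exactly on the grid implied by the series length, the FFT returns (up to scaling) the a_k and b_k coefficients. The snippet below is a minimal illustration assuming numpy; the sine frequency is set to .1875 (a grid frequency for N = 16) so that the coefficients are recovered exactly rather than leaking.

# Minimal sketch (numpy assumed): recover cosine (a_k) and sine (b_k)
# coefficients of the harmonic model from the FFT of a length-N series.
import numpy as np

def harmonic_coefficients(x):
    x = np.asarray(x, dtype=float)
    N = len(x)
    F = np.fft.rfft(x)              # complex DFT at frequencies k/N, k = 0..N//2
    a = 2.0 * F.real / N            # cosine coefficients
    b = -2.0 * F.imag / N           # sine coefficients
    a[0] = F.real[0] / N            # k = 0 term is the mean
    if N % 2 == 0:
        a[-1] = F.real[-1] / N      # Nyquist term for even N
    return a, b

t = np.arange(16)
x = np.cos(2 * np.pi * 0.0625 * t) + 0.75 * np.sin(2 * np.pi * 0.1875 * t)
a, b = harmonic_coefficients(x)
print(np.round(a, 3))               # cosine coefficient ~1 at frequency 1/16
print(np.round(b, 3))               # sine coefficient ~0.75 at frequency 3/16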
A Simple Example.

Shumway (1988) presents a simple example to clarify the underlying "mechanics" of spectrum analysis. Let's create a series with 16 cases following the equation shown above, and then see how we may "extract" the information that was put into it. First, create a variable and define it as:

x = 1*cos(2π*.0625*(v0-1)) + .75*sin(2π*.2*(v0-1))

This variable is made up of two underlying periodicities: the first at the frequency of ν = .0625 (or period 1/.0625 = 16; one observation completes 1/16th of a full cycle, and a full cycle is completed every 16 observations) and the second at the frequency of ν = .2 (or period of 5). The cosine coefficient (1.0) is larger than the sine coefficient (.75). The spectrum analysis summary is shown below.

Let's now review the columns. Clearly, the largest cosine coefficient can be found for the .0625 frequency. A smaller sine coefficient can be found at frequency .1875. Thus, clearly the two sine/cosine frequencies which were inserted into the example data file are reflected in the above table.

The sine and cosine functions are mutually independent (or orthogonal); thus we may sum the squared coefficients for each frequency to obtain the periodogram. Specifically, the periodogram values above are computed as:

P_k = (sine coefficient_k² + cosine coefficient_k²) * N/2

where P_k is the periodogram value at frequency ν_k and N is the overall length of the series. The periodogram values can be interpreted in terms of variance (sums of squares) of the data at the respective frequency or period. Customarily, the periodogram values are plotted against the frequencies or periods.

The Problem of Leakage.

In the example above, a sine function with a frequency of 0.2 was inserted into the series. However, because of the length of the series (16), none of the frequencies reported exactly "hits" on that frequency. In practice, what often happens in those cases is that the respective frequency will "leak" into adjacent frequencies. For example, you may find large periodogram values for two adjacent frequencies, when, in fact, there is only one strong underlying sine or cosine function at a frequency that falls in-between those implied by the length of the series. There are three ways in which we can approach the problem of leakage:

By padding the series, we may apply a finer frequency "roster" to the data.
By tapering the series prior to the analysis, we may reduce leakage, or
By smoothing the periodogram, we may identify the general frequency regions (or spectral densities) that significantly contribute to the cyclical behavior of the series.

See below for descriptions of each of these approaches.

Padding the Time Series.

Because the frequency values are computed as N/t (the number of units of time), we can simply pad the series with a constant (e.g., zeros) and thereby introduce smaller increments in the frequency values. In a sense, padding allows us to apply a finer "roster" to the data. In fact, if we padded the example data file described in the example above with ten zeros, the results would not change; that is, the largest periodogram peaks would still occur at the frequency values closest to .0625 and .2. Padding is also often desirable for computational efficiency reasons (see below).
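The periodogram formula and the effect of padding can be sketched with a few lines of numpy. The function name, the scaling via the FFT, and the padding lengths below are illustrative assumptions, not the toolkit's routine; the leaking .2 component from the example above is kept on purpose.

# Minimal sketch (numpy assumed) of the periodogram P_k = (a_k^2 + b_k^2) * N/2,
# computed from the FFT, and of zero-padding to obtain a finer frequency grid.
import numpy as np

def periodogram(x, pad_to=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = pad_to or n                      # pad with zeros up to length m
    F = np.fft.rfft(x, n=m)
    freqs = np.arange(len(F)) / m        # frequencies in cycles per observation
    P = (np.abs(F) ** 2) * 2.0 / n       # equals (a_k^2 + b_k^2) * n/2 on the unpadded grid
    return freqs, P

t = np.arange(16)
x = np.cos(2 * np.pi * 0.0625 * t) + 0.75 * np.sin(2 * np.pi * 0.2 * t)
for pad in (16, 160):                    # padding yields a finer frequency "roster"
    freqs, P = periodogram(x, pad_to=pad)
    print(f"pad to {pad}: largest peak near frequency {freqs[np.argmax(P[1:]) + 1]:.4f}")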
The so-called process of split-cosine-bell tapering is a recommended transformation of the series prior to the spectrum analysis. It usually leads to a reduction of leakage in the periodogram. The rationale for this transformation is explained in detail in Bloomfield (1976, p. 80-94). In essence, a proportion (p) of the data at the beginning and at the end of the series is transformed via multiplication by the weights:

w_t = 0.5*(1 - cos(π*(t - 0.5)/m)),   for t = 1, ..., m (and symmetrically at the end of the series)

where m is chosen so that 2*m/N is equal to the proportion of data to be tapered (p).

Data Windows and Spectral Density Estimates.

In practice, when analyzing actual data, it is usually not of crucial importance to identify exactly the frequencies for particular underlying sine or cosine functions. Rather, because the periodogram values are subject to substantial random fluctuation, we are faced with the problem of very many "chaotic" periodogram spikes. In that case, we want to find the frequencies with the greatest spectral densities, that is, the frequency regions, consisting of many adjacent frequencies, that contribute most to the overall periodic behavior of the series. This can be accomplished by smoothing the periodogram values via a weighted moving average transformation. Suppose the moving average window is of width m (which must be an odd number); the following are the most commonly used smoothers (note: p = (m-1)/2).

Daniell (or equal weight) window. The Daniell window (Daniell, 1946) amounts to a simple (equal weight) moving average transformation of the periodogram values; that is, each spectral density estimate is computed as the mean of the m/2 preceding and subsequent periodogram values.

Tukey window. In the Tukey (Blackman and Tukey, 1958) or Tukey-Hanning window (named after Julius Von Hann), for each frequency, the weights for the weighted moving average of the periodogram values are computed as w_j = 0.5 + 0.5*cos(π*j/p) (for j = 0 to p), with w_-j = w_j.

Hamming window. In the Hamming (named after R. W. Hamming) window or Tukey-Hamming window (Blackman and Tukey, 1958), for each frequency, the weights for the weighted moving average of the periodogram values are computed as w_j = 0.54 + 0.46*cos(π*j/p) (for j = 0 to p), with w_-j = w_j.

Parzen window. In the Parzen window (Parzen, 1961), for each frequency, the weights for the weighted moving average of the periodogram values are computed as w_j = 1 - 6*(j/p)² + 6*(j/p)³ (for j = 0 to p/2) and w_j = 2*(1 - j/p)³ (for j = p/2 + 1 to p), with w_-j = w_j.

Bartlett window. In the Bartlett window (Bartlett, 1950) the weights are computed as w_j = 1 - (j/p) (for j = 0 to p), with w_-j = w_j.

With the exception of the Daniell window, all weight functions will assign the greatest weight to the observation being smoothed in the center of the window, and increasingly smaller weights to values that are further away from the center. In many cases, all of these data windows will produce very similar results.

Preparing the Data for Analysis.

Let's now consider a few other practical points in spectrum analysis. Usually, we want to subtract the mean from the series, and detrend the series (so that it is stationary) prior to the analysis. Otherwise the periodogram and density spectrum will mostly be "overwhelmed" by a very large value for the first cosine coefficient (for frequency 0.0). In a sense, the mean is a cycle of frequency 0 (zero) per unit time; that is, it is a constant. Similarly, a trend is also of little interest when we want to uncover the periodicities in the series. In fact, both of those potentially strong effects may mask the more interesting periodicities in the data, and thus both the mean and the (linear) trend should be removed from the series prior to the analysis. Sometimes, it is also useful to smooth the data prior to the analysis, in order to "tame" the random noise that may obscure meaningful periodic cycles in the periodogram.
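The pre-processing steps just described (mean and trend removal, split-cosine-bell tapering, and equal-weight smoothing of the periodogram) can be sketched as follows. This is an illustrative reading of the formulas above, assuming numpy; the function names and default parameter values are chosen here for the example and are not the toolkit's implementation.

# Minimal sketch (numpy assumed) of detrending, split-cosine-bell tapering of a
# proportion p of the data at each end, and Daniell (equal-weight) smoothing.
import numpy as np

def detrend(x):
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)      # remove mean and linear trend
    return x - (intercept + slope * t)

def split_cosine_bell(x, p=0.1):
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    m = max(1, int(p * n / 2))                   # 2*m/n ~ proportion tapered
    t = np.arange(1, m + 1)
    w = 0.5 * (1 - np.cos(np.pi * (t - 0.5) / m))  # weights rise from ~0 to ~1
    x[:m] *= w
    x[-m:] *= w[::-1]
    return x

def daniell_smooth(P, m=5):
    w = np.ones(m) / m                           # equal weights; m should be odd
    return np.convolve(P, w, mode="same")

Applying the taper after detrending, and smoothing the resulting periodogram with a small odd window width, mirrors the order of operations described in the text.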
Results when No Periodicity in the Series Exists.

Finally, what if there are no recurring cycles in the data, that is, if each observation is completely independent of all other observations? If the distribution of the observations follows the normal distribution, such a time series is also referred to as a white noise series (like the white noise you hear on the radio when tuned in-between stations). A white noise input series will result in periodogram values that follow an exponential distribution. Thus, by testing the distribution of periodogram values against the exponential distribution, you can test whether the input series is different from a white noise series. In addition, you can then also request to compute the Kolmogorov-Smirnov one-sample d statistic (see also Nonparametrics and Distributions for more details).

Testing for white noise in certain frequency bands. Note that you can also plot the periodogram values for a particular frequency range only. Again, if the input is a white noise series with respect to those frequencies (i.e., if there are no significant periodic cycles of those frequencies), then the distribution of the periodogram values should again follow an exponential distribution.

Fast Fourier Transforms (FFT).

For more information, see Time Series Analysis - Index and the following topics.

General Introduction.

The interpretation of the results of spectrum analysis is discussed in the Basic Notation and Principles topic; however, we have not described how it is done computationally. Up until the mid-1960s the standard way of performing the spectrum decomposition was to use explicit formulae to solve for the sine and cosine parameters. The computations involved required at least N² (complex) multiplications. Thus, even with today's high-speed computers it would be very time consuming to analyze even small time series (e.g., 8,000 observations would result in at least 64 million multiplications).

The time requirements changed drastically with the development of the so-called fast Fourier transform algorithm, or FFT for short. In the mid-1960s, J. W. Cooley and J. W. Tukey (1965) popularized this algorithm which, in retrospect, had in fact been discovered independently by various individuals. Various refinements and improvements of this algorithm can be found in Monro (1975) and Monro and Branch (1976). Readers interested in the computational details of this algorithm may refer to any of the texts cited in the overview. Suffice it to say that via the FFT algorithm, the time to perform a spectral analysis is proportional to N*log2(N) - a huge improvement.

However, a drawback of the standard FFT algorithm is that the number of cases in the series must be equal to a power of 2 (i.e., 16, 64, 128, 256, ...). Usually, this necessitated padding of the series, which, as described above, will in most cases not change the characteristic peaks of the periodogram or the spectral density estimates. In cases, however, where the time units are meaningful, such padding may make the interpretation of results more cumbersome.

Computation of FFT in Time Series.

The implementation of the FFT algorithm allows you to take full advantage of the savings afforded by this algorithm. On most standard computers, series with over 100,000 cases can easily be analyzed. However, there are a few things to remember when analyzing series of that size.
As mentioned above, the standard (and most efficient) FFT algorithm requires that the length of the input series is equal to a power of 2. If this is not the case, additional computations have to be performed. The simple explicit computational formulas will be used as long as the input series is relatively small, and the number of computations can be performed in a relatively short amount of time. For long time series, in order to still utilize the FFT algorithm, an implementation of the general approach described by Monro and Branch (1976) is used. This method requires significantly more storage space; however, series of considerable length can still be analyzed very quickly, even if the number of observations is not equal to a power of 2.

For time series of lengths not equal to a power of 2, we would like to make the following recommendations: If the input series is small to moderately sized (e.g., only a few thousand cases), then do not worry; the analysis will typically only take a few seconds anyway. In order to analyze moderately large and large series (e.g., over 100,000 cases), pad the series to a power of 2 and then taper the series during the exploratory part of your data analysis.
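Following the recommendation above, a series can be zero-padded to the next power of 2 before the FFT. The sketch below assumes numpy (whose FFT in fact handles arbitrary lengths, so the padding here only mirrors the advice given for power-of-2 algorithms); the helper name and series length are illustrative.

# Minimal sketch (numpy assumed) of padding a demeaned series to the next
# power of 2 before taking the FFT.
import numpy as np

def pad_to_power_of_two(x):
    n = len(x)
    m = 1 << (n - 1).bit_length()        # smallest power of 2 >= n
    return np.concatenate([x, np.zeros(m - n)])

x = np.random.default_rng(1).normal(size=100_000)    # length not a power of 2
xp = pad_to_power_of_two(x - x.mean())                # subtract the mean before padding
F = np.fft.rfft(xp)
print(len(x), "->", len(xp))                          # 100000 -> 131072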