6 Conceptually, package loading in the runner can be imagined as a
7 graph-shaped work list. We iteratively pop off leaf nodes (packages
8 that have no unloaded dependencies) and load data from export data, the fact cache, or source.
11 Specifically, non-initial packages are loaded from export data and the
12 fact cache if possible, otherwise from source. Initial packages are
13 loaded from export data, the fact cache and the (problems, ignores,
14 config) cache if possible, otherwise from source.
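
As a rough illustration (the names here are hypothetical and not part of the
runner; the real implementation is goroutine-based, as described below), the
work list behaves like this:

    // leaves initially holds every package that has no dependencies.
    for len(leaves) > 0 {
        pkg := leaves[len(leaves)-1]
        leaves = leaves[:len(leaves)-1]
        load(pkg) // from export data and caches if possible, otherwise from source
        for _, dependent := range pkg.dependents {
            if dependent.pendingDeps--; dependent.pendingDeps == 0 {
                leaves = append(leaves, dependent) // a new leaf has appeared
            }
        }
    }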
16 The appeal of this approach is that it is both simple to implement and
17 easily parallelizable. Each leaf node can be processed independently,
18 and new leaf nodes appear as their dependencies finish processing.
20 The downside of this approach, however, is that we're doing more work
21 than necessary. Imagine an initial package A, which has the following
22 dependency chain: A->B->C->D – in the current implementation, we will
23 load all 4 packages. However, if package A can be loaded fully from
24 cached information, then none of its dependencies are necessary, and
25 we could avoid loading them.
30 Runner implements parallel processing of packages by spawning one
31 goroutine per package in the dependency graph, without any semaphores.
32 Each goroutine initially waits on the completion of all of its
33 dependencies, thus establishing the correct order of processing. Once all
34 dependencies finish processing, the goroutine will load the package
35 from export data or source – this loading is guarded by a semaphore,
36 sized according to the number of CPU cores. This way, we only have as
37 many packages occupying memory and CPU resources as there are actual
38 cores to process them.
40 This combination of unbounded goroutines but bounded package loading
41 means that if we have many parallel, independent subgraphs, they will
42 all execute in parallel, while not wasting resources for long linear
43 chains or trying to process more subgraphs in parallel than the system can handle.
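
The pattern can be sketched as a small, self-contained Go snippet; the node
type and the load callback are placeholders for illustration only, not the
runner's actual types (this assumes the runtime and sync packages, and that
every done channel has been initialized with make):

    type node struct {
        deps []*node
        done chan struct{} // closed once the node has been processed
    }

    func processGraph(nodes []*node, load func(*node)) {
        sem := make(chan struct{}, runtime.GOMAXPROCS(0))
        var wg sync.WaitGroup
        for _, n := range nodes {
            wg.Add(1)
            go func(n *node) {
                defer wg.Done()
                for _, dep := range n.deps {
                    <-dep.done // wait for all dependencies to finish
                }
                sem <- struct{}{} // bound loading to the number of CPU cores
                load(n)
                <-sem
                close(n.done)
            }(n)
        }
        wg.Wait()
    }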
49 We make use of several caches. These caches are Go's export data, our
50 facts cache, and our (problems, ignores, config) cache.
52 Initial packages will either be loaded from a combination of all three
53 caches, or from source. Non-initial packages will either be loaded
54 from a combination of export data and facts cache, or from source.
56 The facts cache is separate from the (problems, ignores, config) cache
57 because when we process non-initial packages, we generate facts, but
58 we discard problems and ignores.
60 The facts cache is keyed by (package, analyzer), whereas the
61 (problems, ignores, config) cache is keyed by (package, list of
62 analyses). The difference between the two exists because there are
63 only a handful of analyses that produce facts, but hundreds of
64 analyses that don't. Creating one cache entry per fact-generating
65 analysis is feasible; creating one cache entry per normal analysis has
66 significant performance and storage overheads.
68 The downside of keying by the list of analyses is, naturally, that a
69 change in the list of analyses changes the cache key. `staticcheck -checks
70 A` and `staticcheck -checks A,B` will therefore need their own cache
71 entries and not reuse each other's work. This problem does not affect the facts cache.
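
Expressed in terms of the cache API used below, the two kinds of keys are
derived roughly as follows (cf. passActionID, loadCachedFacts and
loadCachedPackage):

    // Facts cache: one entry per (package, fact-generating analyzer).
    factsKey := cache.Subkey(passActionID(pkg, a), "facts")

    // (problems, ignores, config) cache: one entry per
    // (package, sorted list of analyzer names).
    dataKey := cache.Subkey(pkg.actionID, "data "+r.problemsCacheKey)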
94 "golang.org/x/tools/go/analysis"
95 "golang.org/x/tools/go/packages"
96 "golang.org/x/tools/go/types/objectpath"
97 "honnef.co/go/tools/config"
98 "honnef.co/go/tools/facts"
99 "honnef.co/go/tools/internal/cache"
100 "honnef.co/go/tools/loader"
104 gob.Register(&FileIgnore{})
105 gob.Register(&LineIgnore{})
108 // If enabled, abuse of the go/analysis API will lead to panics
109 const sanityCheck = true
111 // OPT(dh): for a dependency tree A->B->C->D, if we have cached data
112 // for B, there should be no need to load C and D individually. Go's
113 // export data for B contains all the data we need on types, and our
114 // fact cache could store the union of B, C and D in B.
116 // This may change unused's behavior, however, as it may observe fewer
117 // interfaces from transitive dependencies.
119 // OPT(dh): every single package will have the same value for
120 // canClearTypes. We could move the Package.decUse method to runner to
121 // eliminate this field. This is probably not worth it, though. There
122 // are only thousands of packages, so the field only takes up
123 // kilobytes of memory.
125 // OPT(dh): do we really need the Package.gen field? it's based
126 // trivially on pkg.results and merely caches the result of a type
127 // assertion. How often do we actually use the field?
129 type Package struct {
130 // dependents is initially set to 1 plus the number of packages
131 // that directly import this package. It is atomically decreased
132 // by 1 every time a dependent has been processed or when the
133 // package itself has been processed. Once the value reaches zero,
134 // the package is no longer needed.
140 // fromSource is set to true for packages that have been loaded
141 // from source. This is the case for initial packages, packages
142 // with missing export data, and packages with no cached facts.
144 // hash stores the package hash, as computed by packageHash
146 actionID cache.ActionID
150 // results maps analyzer IDs to analyzer results. it is
151 // implemented as a deduplicating concurrent cache.
155 // gen maps file names to the code generator that created them
156 gen map[string]facts.Generator
161 // these slices are indexed by analysis
162 facts []map[types.Object][]analysis.Fact
163 pkgFacts [][]analysis.Fact
165 // canClearTypes is set to true if we can discard type
166 // information after the package and its dependents have been
167 // processed. This is the case when no cumulative checkers are being run.
172 type cachedPackage struct {
175 Config *config.Config
178 func (pkg *Package) decUse() {
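// Adding ^uint64(0) (all bits set) wraps around and decrements the counter by one.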
179 ret := atomic.AddUint64(&pkg.dependents, ^uint64(0))
181 // nobody depends on this package anymore
182 if pkg.canClearTypes {
188 for _, imp := range pkg.Imports {
206 analyzerIDs analyzerIDs
207 problemsCacheKey string
209 // limits parallelism of loading packages
210 loadSem chan struct{}
213 type analyzerIDs struct {
214 m map[*analysis.Analyzer]int
217 func (ids analyzerIDs) get(a *analysis.Analyzer) int {
220 panic(fmt.Sprintf("no analyzer ID for %s", a.Name))
230 type analysisAction struct {
231 analyzer *analysis.Analyzer
234 newPackageFacts []analysis.Fact
237 pkgFacts map[*types.Package][]analysis.Fact
240 func (ac *analysisAction) String() string {
241 return fmt.Sprintf("%s @ %s", ac.analyzer, ac.pkg)
244 func (ac *analysisAction) allObjectFacts() []analysis.ObjectFact {
245 out := make([]analysis.ObjectFact, 0, len(ac.pkg.facts[ac.analyzerID]))
246 for obj, facts := range ac.pkg.facts[ac.analyzerID] {
247 for _, fact := range facts {
248 out = append(out, analysis.ObjectFact{
257 func (ac *analysisAction) allPackageFacts() []analysis.PackageFact {
258 out := make([]analysis.PackageFact, 0, len(ac.pkgFacts))
259 for pkg, facts := range ac.pkgFacts {
260 for _, fact := range facts {
261 out = append(out, analysis.PackageFact{
270 func (ac *analysisAction) importObjectFact(obj types.Object, fact analysis.Fact) bool {
271 if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
272 panic("analysis doesn't export any facts")
274 for _, f := range ac.pkg.facts[ac.analyzerID][obj] {
275 if reflect.TypeOf(f) == reflect.TypeOf(fact) {
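// Copy the cached fact into the value provided by the caller.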
276 reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
283 func (ac *analysisAction) importPackageFact(pkg *types.Package, fact analysis.Fact) bool {
284 if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
285 panic("analysis doesn't export any facts")
287 for _, f := range ac.pkgFacts[pkg] {
288 if reflect.TypeOf(f) == reflect.TypeOf(fact) {
289 reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
296 func (ac *analysisAction) exportObjectFact(obj types.Object, fact analysis.Fact) {
297 if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
298 panic("analysis doesn't export any facts")
300 ac.pkg.facts[ac.analyzerID][obj] = append(ac.pkg.facts[ac.analyzerID][obj], fact)
303 func (ac *analysisAction) exportPackageFact(fact analysis.Fact) {
304 if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
305 panic("analysis doesn't export any facts")
307 ac.pkgFacts[ac.pkg.Types] = append(ac.pkgFacts[ac.pkg.Types], fact)
308 ac.newPackageFacts = append(ac.newPackageFacts, fact)
311 func (ac *analysisAction) report(pass *analysis.Pass, d analysis.Diagnostic) {
313 Pos: DisplayPosition(pass.Fset, d.Pos),
314 End: DisplayPosition(pass.Fset, d.End),
316 Check: pass.Analyzer.Name,
318 for _, r := range d.Related {
319 p.Related = append(p.Related, Related{
320 Pos: DisplayPosition(pass.Fset, r.Pos),
321 End: DisplayPosition(pass.Fset, r.End),
325 ac.problems = append(ac.problems, p)
328 func (r *Runner) runAnalysis(ac *analysisAction) (ret interface{}, err error) {
329 ac.pkg.resultsMu.Lock()
330 res := ac.pkg.results[r.analyzerIDs.get(ac.analyzer)]
332 ac.pkg.resultsMu.Unlock()
334 return res.v, res.err
337 ready: make(chan struct{}),
339 ac.pkg.results[r.analyzerIDs.get(ac.analyzer)] = res
340 ac.pkg.resultsMu.Unlock()
348 pass := new(analysis.Pass)
349 *pass = analysis.Pass{
350 Analyzer: ac.analyzer,
352 Files: ac.pkg.Syntax,
353 // type information may be nil or may be populated. if it is
354 // nil, it will get populated later.
356 TypesInfo: ac.pkg.TypesInfo,
357 TypesSizes: ac.pkg.TypesSizes,
358 ResultOf: map[*analysis.Analyzer]interface{}{},
359 ImportObjectFact: ac.importObjectFact,
360 ImportPackageFact: ac.importPackageFact,
361 ExportObjectFact: ac.exportObjectFact,
362 ExportPackageFact: ac.exportPackageFact,
363 Report: func(d analysis.Diagnostic) {
366 AllObjectFacts: ac.allObjectFacts,
367 AllPackageFacts: ac.allPackageFacts,
371 // Don't report problems in dependencies
372 pass.Report = func(analysis.Diagnostic) {}
374 return r.runAnalysisUser(pass, ac)
378 func (r *Runner) loadCachedPackage(pkg *Package, analyzers []*analysis.Analyzer) (cachedPackage, bool) {
379 // OPT(dh): we can cache this computation, it'll be the same for all packages
380 id := cache.Subkey(pkg.actionID, "data "+r.problemsCacheKey)
382 b, _, err := r.cache.GetBytes(id)
384 return cachedPackage{}, false
386 var cpkg cachedPackage
387 if err := gob.NewDecoder(bytes.NewReader(b)).Decode(&cpkg); err != nil {
388 return cachedPackage{}, false
393 func (r *Runner) loadCachedFacts(a *analysis.Analyzer, pkg *Package) ([]Fact, bool) {
394 if len(a.FactTypes) == 0 {
399 // Look in the cache for facts
400 aID := passActionID(pkg, a)
401 aID = cache.Subkey(aID, "facts")
402 b, _, err := r.cache.GetBytes(aID)
404 // No cached facts, analyse this package like a user-provided one, but ignore diagnostics
408 if err := gob.NewDecoder(bytes.NewReader(b)).Decode(&facts); err != nil {
409 // Cached facts are broken, analyse this package like a user-provided one, but ignore diagnostics
415 type dependencyError struct {
420 func (err dependencyError) nested() dependencyError {
421 if o, ok := err.err.(dependencyError); ok {
427 func (err dependencyError) Error() string {
428 if o, ok := err.err.(dependencyError); ok {
431 return fmt.Sprintf("error running dependency %s: %s", err.dep, err.err)
434 func (r *Runner) makeAnalysisAction(a *analysis.Analyzer, pkg *Package) *analysisAction {
435 aid := r.analyzerIDs.get(a)
436 ac := &analysisAction{
442 if len(a.FactTypes) == 0 {
446 // Merge all package facts of dependencies
447 ac.pkgFacts = map[*types.Package][]analysis.Fact{}
448 seen := map[*Package]struct{}{}
449 var dfs func(*Package)
450 dfs = func(pkg *Package) {
451 if _, ok := seen[pkg]; ok {
454 seen[pkg] = struct{}{}
455 s := pkg.pkgFacts[aid]
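// Use a full slice expression so that later appends to ac.pkgFacts allocate a new backing array instead of writing into pkg.pkgFacts.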
456 ac.pkgFacts[pkg.Types] = s[0:len(s):len(s)]
457 for _, imp := range pkg.Imports {
466 // analyses that we always want to run, even if they're not being run
467 // explicitly or as dependencies. These are necessary for the inner
468 // workings of the runner.
469 var injectedAnalyses = []*analysis.Analyzer{facts.Generated, config.Analyzer}
471 func (r *Runner) runAnalysisUser(pass *analysis.Pass, ac *analysisAction) (interface{}, error) {
472 if !ac.pkg.fromSource {
473 panic(fmt.Sprintf("internal error: %s was not loaded from source", ac.pkg))
476 // User-provided package, analyse it
477 // First analyze it with dependencies
478 for _, req := range ac.analyzer.Requires {
479 acReq := r.makeAnalysisAction(req, ac.pkg)
480 ret, err := r.runAnalysis(acReq)
482 // We couldn't run a dependency, no point in going on
483 return nil, dependencyError{req.Name, err}
486 pass.ResultOf[req] = ret
489 // Then with this analyzer
491 for i := uint(0); i < r.repeatAnalyzers+1; i++ {
494 ret, err = ac.analyzer.Run(pass)
495 r.stats.MeasureAnalyzer(ac.analyzer, ac.pkg, time.Since(t))
501 if len(ac.analyzer.FactTypes) > 0 {
502 // Merge new facts into the package and persist them.
504 for _, fact := range ac.newPackageFacts {
505 id := r.analyzerIDs.get(ac.analyzer)
506 ac.pkg.pkgFacts[id] = append(ac.pkg.pkgFacts[id], fact)
507 facts = append(facts, Fact{"", fact})
509 for obj, afacts := range ac.pkg.facts[ac.analyzerID] {
510 if obj.Pkg() != ac.pkg.Package.Types {
513 path, err := objectpath.For(obj)
517 for _, fact := range afacts {
518 facts = append(facts, Fact{string(path), fact})
522 if err := r.cacheData(facts, ac.pkg, ac.analyzer, "facts"); err != nil {
530 func (r *Runner) cacheData(v interface{}, pkg *Package, a *analysis.Analyzer, subkey string) error {
531 buf := &bytes.Buffer{}
532 if err := gob.NewEncoder(buf).Encode(v); err != nil {
535 aID := passActionID(pkg, a)
536 aID = cache.Subkey(aID, subkey)
537 if err := r.cache.PutBytes(aID, buf.Bytes()); err != nil {
543 func NewRunner(stats *Stats) (*Runner, error) {
544 cache, err := cache.Default()
555 // Run loads packages corresponding to patterns and analyses them with
556 // analyzers. It returns the loaded packages, which contain reported
557 // diagnostics as well as extracted ignore directives.
559 // Note that diagnostics have not been filtered at this point yet, to
560 // accommodate cumulative analyses that require additional steps to
561 // produce diagnostics.
562 func (r *Runner) Run(cfg *packages.Config, patterns []string, analyzers []*analysis.Analyzer, hasCumulative bool) ([]*Package, error) {
563 checkerNames := make([]string, len(analyzers))
564 for i, a := range analyzers {
565 checkerNames[i] = a.Name
567 sort.Strings(checkerNames)
568 r.problemsCacheKey = strings.Join(checkerNames, " ")
570 var allAnalyzers []*analysis.Analyzer
571 r.analyzerIDs = analyzerIDs{m: map[*analysis.Analyzer]int{}}
573 seen := map[*analysis.Analyzer]struct{}{}
574 var dfs func(a *analysis.Analyzer)
575 dfs = func(a *analysis.Analyzer) {
576 if _, ok := seen[a]; ok {
580 allAnalyzers = append(allAnalyzers, a)
581 r.analyzerIDs.m[a] = id
583 for _, f := range a.FactTypes {
586 for _, req := range a.Requires {
590 for _, a := range analyzers {
591 if v := a.Flags.Lookup("go"); v != nil {
592 v.Value.Set(fmt.Sprintf("1.%d", r.goVersion))
596 for _, a := range injectedAnalyses {
599 // Run all analyzers on all packages (subject to further
600 // restrictions enforced later). This guarantees that if analyzer
601 // A1 depends on A2, and A2 has facts, that A2 will run on the
602 // dependencies of user-provided packages, even though A1 won't.
603 analyzers = allAnalyzers
605 var dcfg packages.Config
610 atomic.StoreUint32(&r.stats.State, StateGraph)
611 initialPkgs, err := loader.Graph(dcfg, patterns...)
617 var allPkgs []*Package
618 m := map[*packages.Package]*Package{}
619 packages.Visit(initialPkgs, nil, func(l *packages.Package) {
622 results: make([]*result, len(r.analyzerIDs.m)),
623 facts: make([]map[types.Object][]analysis.Fact, len(r.analyzerIDs.m)),
624 pkgFacts: make([][]analysis.Fact, len(r.analyzerIDs.m)),
625 done: make(chan struct{}),
626 // every package needs itself
628 canClearTypes: !hasCumulative,
630 allPkgs = append(allPkgs, m[l])
631 for i := range m[l].facts {
632 m[l].facts[i] = map[types.Object][]analysis.Fact{}
634 for _, err := range l.Errors {
635 m[l].errs = append(m[l].errs, err)
637 for _, v := range l.Imports {
639 m[l].Imports = append(m[l].Imports, m[v])
642 m[l].hash, err = r.packageHash(m[l])
643 m[l].actionID = packageActionID(m[l])
645 m[l].errs = append(m[l].errs, err)
649 pkgs := make([]*Package, len(initialPkgs))
650 for i, l := range initialPkgs {
652 pkgs[i].initial = true
655 atomic.StoreUint32(&r.stats.InitialPackages, uint32(len(initialPkgs)))
656 atomic.StoreUint32(&r.stats.TotalPackages, uint32(len(allPkgs)))
657 atomic.StoreUint32(&r.stats.State, StateProcessing)
659 var wg sync.WaitGroup
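// One semaphore slot per GOMAXPROCS worker: this bounds how many packages are being loaded and occupy memory at any one time (see the package comment above).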
661 r.loadSem = make(chan struct{}, runtime.GOMAXPROCS(-1))
662 atomic.StoreUint32(&r.stats.TotalWorkers, uint32(cap(r.loadSem)))
663 for _, pkg := range allPkgs {
666 r.processPkg(pkg, analyzers)
669 atomic.AddUint32(&r.stats.ProcessedInitialPackages, 1)
671 atomic.AddUint32(&r.stats.Problems, uint32(len(pkg.problems)))
680 var posRe = regexp.MustCompile(`^(.+?):(\d+)(?::(\d+)?)?`)
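// parsePos parses a position of the form "file.go:line:col" (the column is optional). It returns the parsed position and the length of the matched prefix; "-" and the empty string denote no position.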
682 func parsePos(pos string) (token.Position, int, error) {
683 if pos == "-" || pos == "" {
684 return token.Position{}, 0, nil
686 parts := posRe.FindStringSubmatch(pos)
688 return token.Position{}, 0, fmt.Errorf("malformed position %q", pos)
691 line, _ := strconv.Atoi(parts[2])
692 col, _ := strconv.Atoi(parts[3])
693 return token.Position{
697 }, len(parts[0]), nil
700 // loadPkg loads a Go package. It may be loaded from a combination of
701 // caches, or from source.
702 func (r *Runner) loadPkg(pkg *Package, analyzers []*analysis.Analyzer) error {
703 if pkg.Types != nil {
704 panic(fmt.Sprintf("internal error: %s has already been loaded", pkg.Package))
708 // Try to load cached package
709 cpkg, ok := r.loadCachedPackage(pkg, analyzers)
711 pkg.problems = cpkg.Problems
712 pkg.ignores = cpkg.Ignores
713 pkg.cfg = cpkg.Config
715 pkg.fromSource = true
716 return loader.LoadFromSource(pkg.Package)
720 // At this point we're either working with a non-initial package,
721 // or we managed to load cached problems for the package. We still
722 // need export data and facts.
724 // OPT(dh): we don't need type information for this package if no
725 // other package depends on it. This may be the case for initial packages.
728 // Load package from export data
729 if err := loader.LoadFromExport(pkg.Package); err != nil {
730 // We asked Go to give us up to date export data, yet
731 // we can't load it. There must be something wrong.
733 // Attempt loading from source. This should fail (because
734 // otherwise there would be export data); we just want to
735 // get the compile errors. If loading from source succeeds
736 // we discard the result, anyway. Otherwise we'll fail
737 // when trying to reload from export data later.
739 // FIXME(dh): we no longer reload from export data, so
740 // theoretically we should be able to continue
741 pkg.fromSource = true
742 if err := loader.LoadFromSource(pkg.Package); err != nil {
745 // Make sure this package can't be imported successfully
746 pkg.Package.Errors = append(pkg.Package.Errors, packages.Error{
748 Msg: fmt.Sprintf("could not load export data: %s", err),
749 Kind: packages.ParseError,
751 return fmt.Errorf("could not load export data: %s", err)
755 seen := make([]bool, len(r.analyzerIDs.m))
756 var dfs func(*analysis.Analyzer)
757 dfs = func(a *analysis.Analyzer) {
758 if seen[r.analyzerIDs.get(a)] {
761 seen[r.analyzerIDs.get(a)] = true
763 if len(a.FactTypes) > 0 {
764 facts, ok := r.loadCachedFacts(a, pkg)
770 for _, f := range facts {
772 // This is a package fact
773 pkg.pkgFacts[r.analyzerIDs.get(a)] = append(pkg.pkgFacts[r.analyzerIDs.get(a)], f.Fact)
776 obj, err := objectpath.Object(pkg.Types, objectpath.Path(f.Path))
778 // Be lenient about these errors. For example, when
779 // analysing io/ioutil from source, we may get a fact
780 // for methods on the devNull type, and objectpath
781 // will happily create a path for them. However, when
782 // we later load io/ioutil from export data, the path
783 // no longer resolves.
785 // If an exported type embeds the unexported type,
786 // then (part of) the unexported type will become part
787 // of the type information and our path will resolve again.
791 pkg.facts[r.analyzerIDs.get(a)][obj] = append(pkg.facts[r.analyzerIDs.get(a)][obj], f.Fact)
795 for _, req := range a.Requires {
799 for _, a := range analyzers {
807 // We failed to load some cached facts
808 pkg.fromSource = true
809 // XXX we added facts to the maps, we need to get rid of those
810 return loader.LoadFromSource(pkg.Package)
813 type analysisError struct {
814 analyzer *analysis.Analyzer
819 func (err analysisError) Error() string {
820 return fmt.Sprintf("error running analyzer %s on %s: %s", err.analyzer, err.pkg, err.err)
823 // processPkg processes a package. This involves loading the package,
824 // either from export data or from source. For packages loaded from
825 // source, the provided analyzers will be run on the package.
826 func (r *Runner) processPkg(pkg *Package, analyzers []*analysis.Analyzer) {
828 // Clear information we no longer need. Make sure to do this
829 // when returning from processPkg so that we clear
830 // dependencies, not just initial packages.
835 atomic.AddUint32(&r.stats.ProcessedPackages, 1)
840 // Ensure all packages have the generated map and config. This is
841 // required by internals of the runner. Analyses that themselves
842 // make use of either have an explicit dependency so that other
843 // runners work correctly, too.
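// The three-index slice expression forces append to allocate a new backing array, leaving the caller's analyzers slice untouched.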
844 analyzers = append(analyzers[0:len(analyzers):len(analyzers)], injectedAnalyses...)
846 if len(pkg.errs) != 0 {
850 for _, imp := range pkg.Imports {
852 if len(imp.errs) > 0 {
854 // Don't print the error of the dependency since it's
855 // an initial package and we're already printing the error.
857 pkg.errs = append(pkg.errs, fmt.Errorf("could not analyze dependency %s of %s", imp, pkg))
860 for _, err := range imp.errs {
861 s += "\n\t" + err.Error()
863 pkg.errs = append(pkg.errs, fmt.Errorf("could not analyze dependency %s of %s: %s", imp, pkg, s))
868 if pkg.PkgPath == "unsafe" {
869 pkg.Types = types.Unsafe
873 r.loadSem <- struct{}{}
874 atomic.AddUint32(&r.stats.ActiveWorkers, 1)
877 atomic.AddUint32(&r.stats.ActiveWorkers, ^uint32(0))
879 if err := r.loadPkg(pkg, analyzers); err != nil {
880 pkg.errs = append(pkg.errs, err)
884 // A package's object facts is the union of all of its dependencies.
885 for _, imp := range pkg.Imports {
886 for ai, m := range imp.facts {
887 for obj, facts := range m {
888 pkg.facts[ai][obj] = facts[0:len(facts):len(facts)]
894 // Nothing left to do for the package.
898 // Run analyses on initial packages and those missing facts
899 var wg sync.WaitGroup
900 wg.Add(len(analyzers))
901 errs := make([]error, len(analyzers))
902 var acs []*analysisAction
903 for i, a := range analyzers {
906 ac := r.makeAnalysisAction(a, pkg)
907 acs = append(acs, ac)
910 // Only initial packages and packages with missing
911 // facts will have been loaded from source.
912 if pkg.initial || len(a.FactTypes) > 0 {
913 if _, err := r.runAnalysis(ac); err != nil {
914 errs[i] = analysisError{a, pkg, err}
922 depErrors := map[dependencyError]int{}
923 for _, err := range errs {
927 switch err := err.(type) {
929 switch err := err.err.(type) {
930 case dependencyError:
931 depErrors[err.nested()]++
933 pkg.errs = append(pkg.errs, err)
936 pkg.errs = append(pkg.errs, err)
939 for err, count := range depErrors {
940 pkg.errs = append(pkg.errs,
941 fmt.Errorf("could not run %s@%s, preventing %d analyzers from running: %s", err.dep, pkg, count, err.err))
944 // We can't process ignores at this point because `unused` needs
945 // to see more than one package to make its decision.
947 // OPT(dh): can't we guard this block of code by pkg.initial?
948 ignores, problems := parseDirectives(pkg.Package)
949 pkg.ignores = append(pkg.ignores, ignores...)
950 pkg.problems = append(pkg.problems, problems...)
951 for _, ac := range acs {
952 pkg.problems = append(pkg.problems, ac.problems...)
956 // Only initial packages have these analyzers run, and only
957 // initial packages need these.
958 if pkg.results[r.analyzerIDs.get(config.Analyzer)].v != nil {
959 pkg.cfg = pkg.results[r.analyzerIDs.get(config.Analyzer)].v.(*config.Config)
961 pkg.gen = pkg.results[r.analyzerIDs.get(facts.Generated)].v.(map[string]facts.Generator)
964 // In a previous version of the code, we would throw away all type
965 // information and reload it from export data. That was
966 // nonsensical. The *types.Package doesn't keep any information
967 // live that export data wouldn't also. We only need to discard
968 // the AST and the TypesInfo maps; that happens after we return from processPkg.
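// parseDirective splits a "//lint:" comment into its command and arguments. For example,
// parseDirective(`//lint:ignore SA1000 we rely on this behavior`) returns "ignore" and
// ["SA1000", "we", "rely", "on", "this", "behavior"]; comments without the prefix yield an empty command.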
972 func parseDirective(s string) (cmd string, args []string) {
973 if !strings.HasPrefix(s, "//lint:") {
976 s = strings.TrimPrefix(s, "//lint:")
977 fields := strings.Split(s, " ")
978 return fields[0], fields[1:]
981 // parseDirectives extracts all linter directives from the source
982 // files of the package. Malformed directives are returned as problems.
983 func parseDirectives(pkg *packages.Package) ([]Ignore, []Problem) {
985 var problems []Problem
987 for _, f := range pkg.Syntax {
990 for _, cg := range f.Comments {
991 for _, c := range cg.List {
992 if strings.Contains(c.Text, "//lint:") {
1001 cm := ast.NewCommentMap(pkg.Fset, f, f.Comments)
1002 for node, cgs := range cm {
1003 for _, cg := range cgs {
1004 for _, c := range cg.List {
1005 if !strings.HasPrefix(c.Text, "//lint:") {
1008 cmd, args := parseDirective(c.Text)
1010 case "ignore", "file-ignore":
1013 Pos: DisplayPosition(pkg.Fset, c.Pos()),
1014 Message: "malformed linter directive; missing the required reason field?",
1018 problems = append(problems, p)
1022 // unknown directive, ignore
1025 checks := strings.Split(args[0], ",")
1026 pos := DisplayPosition(pkg.Fset, node.Pos())
1034 Pos: DisplayPosition(pkg.Fset, c.Pos()),
1042 ignores = append(ignores, ig)
1048 return ignores, problems
1051 // packageHash computes a package's hash. The hash is based on all Go
1052 // files that make up the package, as well as the hashes of imported packages.
1054 func (r *Runner) packageHash(pkg *Package) (string, error) {
1055 key := cache.NewHash("package hash")
1056 fmt.Fprintf(key, "pkgpath %s\n", pkg.PkgPath)
1057 fmt.Fprintf(key, "go %d\n", r.goVersion)
1058 for _, f := range pkg.CompiledGoFiles {
1059 h, err := cache.FileHash(f)
1063 fmt.Fprintf(key, "file %s %x\n", f, h)
1066 // Actually load the configuration to calculate its hash. This
1067 // will take into consideration inheritance of configuration
1068 // files, as well as the default configuration.
1070 // OPT(dh): doing this means we'll load the config twice: once for
1071 // computing the hash, and once when analyzing the package from source.
1073 cdir := config.Dir(pkg.GoFiles)
1075 fmt.Fprintf(key, "file %s %x\n", config.ConfigName, [cache.HashSize]byte{})
1077 cfg, err := config.Load(cdir)
1081 h := cache.NewHash(config.ConfigName)
1082 if _, err := h.Write([]byte(cfg.String())); err != nil {
1085 fmt.Fprintf(key, "file %s %x\n", config.ConfigName, h.Sum())
1088 imps := make([]*Package, len(pkg.Imports))
1089 copy(imps, pkg.Imports)
1090 sort.Slice(imps, func(i, j int) bool {
1091 return imps[i].PkgPath < imps[j].PkgPath
1093 for _, dep := range imps {
1094 if dep.PkgPath == "unsafe" {
1098 fmt.Fprintf(key, "import %s %s\n", dep.PkgPath, dep.hash)
1101 return hex.EncodeToString(h[:]), nil
1104 func packageActionID(pkg *Package) cache.ActionID {
1105 key := cache.NewHash("package ID")
1106 fmt.Fprintf(key, "pkgpath %s\n", pkg.PkgPath)
1107 fmt.Fprintf(key, "pkghash %s\n", pkg.hash)
1111 // passActionID computes an ActionID for an analysis pass.
1112 func passActionID(pkg *Package, analyzer *analysis.Analyzer) cache.ActionID {
1113 return cache.Subkey(pkg.actionID, fmt.Sprintf("analyzer %s", analyzer.Name))